Workflows support different request types to handle various AI operations. The request_type field determines the expected message format and API endpoint usage.
Available request types
Chat (default)
Standard chat completions for conversational AI.
Use cases: Conversational assistants, Q&A systems, multi-turn dialogues, chatbots
1{2 "workflow": "my-chat-workflow",3 "messages": [4 {"role": "system", "content": "You are a helpful assistant."},5 {"role": "user", "content": "Hello, how are you?"}6 ],7 "temperature": 0.7,8 "max_tokens": 10009}Supported providers:
- OpenAI (gpt-4o, gpt-4-turbo, gpt-3.5-turbo, etc.)
- Anthropic (claude-3-5-sonnet, claude-3-opus, etc.)
- xAI (Grok)
- Mistral (Mistral Large, Mixtral)
- Google (Gemini)
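A multi-turn dialogue is carried by resending the growing `messages` array on each request. A minimal sketch in Python of how a client might assemble that payload (the helper name and the example assistant reply are illustrative, not part of this API):

```python
def build_chat_request(workflow, messages, temperature=0.7, max_tokens=1000):
    """Assemble a chat-type request body matching the example above."""
    return {
        "workflow": workflow,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

# Start with a system prompt and the first user turn.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, how are you?"},
]
req = build_chat_request("my-chat-workflow", messages)

# After a response arrives, append the assistant turn plus the next user
# turn, then resend the full history to keep the conversation stateful.
messages.append({"role": "assistant", "content": "I'm doing well, thanks!"})
messages.append({"role": "user", "content": "Great, what can you do?"})
```

Resending the full history is what makes the dialogue multi-turn; the workflow itself is stateless between requests.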
Completion
Text completions for autocomplete and single-shot text generation.
Use cases: Code completion, text autocomplete, prompt-based text generation
1{2 "workflow": "my-completion-workflow",3 "prompt": "Once upon a time in a galaxy far far away",4 "max_tokens": 500,5 "temperature": 0.86}Supported providers:
- OpenAI (gpt-3.5-turbo-instruct, text-davinci-003, etc.)
Image
Image generation from text descriptions.
Use cases: Creating images from text prompts, visual content generation, design mockups, concept art
1{2 "workflow": "my-image-workflow",3 "prompt": "A beautiful sunset over mountains with a lake in the foreground",4 "size": "1024x1024",5 "quality": "hd",6 "n": 17}Supported providers:
- OpenAI (dall-e-3, dall-e-2)
- Stability AI (stable-diffusion-xl, etc.)
Response:
1{2 "data": {3 "created": 1234567890,4 "data": [5 {6 "url": "https://...",7 "revised_prompt": "..."8 }9 ]10 }11}Embedding
Generate vector embeddings for text: essential for semantic search, RAG, and clustering.
Use cases: Semantic search, text similarity, clustering, recommendation systems, RAG (Retrieval Augmented Generation)
Single text:
1{2 "workflow": "my-embedding-workflow",3 "input": "The quick brown fox jumps over the lazy dog",4 "encoding_format": "float"5}Batch (multiple texts):
1{2 "workflow": "my-embedding-workflow",3 "input": [4 "First text to embed",5 "Second text to embed",6 "Third text to embed"7 ]8}Supported providers:
- OpenAI (text-embedding-3-large, text-embedding-3-small, text-embedding-ada-002)
- Cohere (embed-english-v3.0, embed-multilingual-v3.0)
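The embedding vectors in the response are typically compared with cosine similarity for search and clustering. A minimal sketch (the toy vectors are illustrative; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for the "embedding" arrays in the response.
v1 = [0.123, -0.456, 0.789]
v2 = [0.120, -0.440, 0.800]
score = cosine_similarity(v1, v2)  # close to 1.0: the texts are similar
```

For a semantic-search use case, you would embed the query and all documents with the same workflow, then rank documents by this score.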
Response:
1{2 "data": {3 "object": "list",4 "data": [5 {6 "object": "embedding",7 "embedding": [0.123, -0.456, 0.789],8 "index": 09 }10 ],11 "model": "text-embedding-3-large",12 "usage": {13 "prompt_tokens": 8,14 "total_tokens": 815 }16 }17}Audio
Audio transcription, translation, and text-to-speech generation.
Use cases: Speech-to-text transcription, audio translation, text-to-speech, voice generation
Transcription:
1{2 "workflow": "my-audio-workflow",3 "file": "base64_encoded_audio_data",4 "model": "whisper-1",5 "response_format": "json",6 "language": "en"7}Text-to-speech:
1{2 "workflow": "my-tts-workflow",3 "input": "Hello, this is a test of text to speech.",4 "voice": "alloy",5 "response_format": "mp3"6}Supported providers:
- OpenAI (whisper-1, tts-1, tts-1-hd)
- ElevenLabs (various voices)
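The transcription `file` field carries the audio as a base64 string. A sketch of how a client might prepare it (the helper names are illustrative):

```python
import base64

def encode_audio(raw_bytes):
    """Base64-encode raw audio bytes for the transcription 'file' field."""
    return base64.b64encode(raw_bytes).decode("ascii")

def build_transcription_request(workflow, raw_bytes, language="en"):
    return {
        "workflow": workflow,
        "file": encode_audio(raw_bytes),
        "model": "whisper-1",
        "response_format": "json",
        "language": language,
    }

# In practice raw_bytes would come from open("speech.mp3", "rb").read().
req = build_transcription_request("my-audio-workflow", b"\x00\x01\x02\x03")
```

Base64 inflates the payload by roughly a third, so keep request-size limits in mind for long recordings.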
Vision
Image analysis and understanding.
Use cases: Image description, OCR, object detection, visual question answering, image classification
Base64 image:
1{2 "workflow": "my-vision-workflow",3 "messages": [4 {5 "role": "user",6 "content": [7 {"type": "text", "text": "What's in this image?"},8 {9 "type": "image_url",10 "image_url": {"url": "data:image/jpeg;base64,..."}11 }12 ]13 }14 ]15}URL-based image:
1{2 "workflow": "my-vision-workflow",3 "messages": [4 {5 "role": "user",6 "content": [7 {"type": "text", "text": "Describe this image in detail"},8 {9 "type": "image_url",10 "image_url": {11 "url": "https://example.com/image.jpg",12 "detail": "high"13 }14 }15 ]16 }17 ]18}Supported providers:
- OpenAI (gpt-4-vision-preview, gpt-4o)
- Anthropic (claude-3-opus, claude-3-5-sonnet with vision)
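For the base64 variant, the image bytes are wrapped in a data URL before being placed in the `image_url` content part. A sketch of the assembly (the helper names are illustrative, not part of this API):

```python
import base64

def image_to_data_url(image_bytes, mime="image/jpeg"):
    """Embed raw image bytes as a base64 data URL for image_url content."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

def build_vision_request(workflow, question, image_bytes):
    return {
        "workflow": workflow,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url",
                     "image_url": {"url": image_to_data_url(image_bytes)}},
                ],
            }
        ],
    }

# In practice image_bytes would come from open("photo.jpg", "rb").read().
req = build_vision_request("my-vision-workflow",
                           "What's in this image?", b"\xff\xd8\xff")
```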
Creating workflows with request types
Via the console
- Open Workflows in your project
- Click Create Workflow
- Select the Request Type from the dropdown
- Choose a provider and model that supports your selected type
- Save the workflow
Via the API
```
POST /api/console/projects/:project_id/workflows
Content-Type: application/json

{
  "name": "my_image_generator",
  "description": "Generate images from text descriptions",
  "provider": "openai",
  "model": "dall-e-3",
  "request_type": "image",
  "backup_1_provider": "stability",
  "backup_1_model": "stable-diffusion-xl"
}
```
Validation
The system validates that:
- The request_type matches the expected format for the provider
- The message structure conforms to the request type requirements
- The selected model supports the specified request type
Example error:
1{2 "error": "Invalid request format",3 "message": "Request type 'image' requires 'prompt' field, not 'messages'",4 "hint": "See /docs/api/request-types for correct format"5}Default behaviour
If no request_type is specified when creating a workflow, it defaults to "chat". Existing workflows without a request_type automatically use "chat" for backward compatibility.
Quick reference
| Type | Key field | Typical models |
|---|---|---|
| chat | messages | GPT-4o, Claude 3.5, Gemini |
| completion | prompt | GPT-3.5-instruct |
| image | prompt + size | DALL-E 3, Stable Diffusion |
| embedding | input | text-embedding-3-large |
| audio | file or input | Whisper, TTS-1 |
| vision | messages (with image) | GPT-4o, Claude 3.5 |
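The key-field column above can double as a cheap client-side check before a request is sent, catching format mistakes without a round trip. A sketch under the assumption that the server validates the same top-level fields (the helper and its error strings are illustrative, not the server's actual implementation):

```python
# Required top-level field per request type, per the quick reference above.
REQUIRED_FIELD = {
    "chat": "messages",
    "completion": "prompt",
    "image": "prompt",
    "embedding": "input",
    "vision": "messages",
}

def check_payload(request_type, payload):
    """Return an error string if the payload is malformed, else None."""
    if request_type == "audio":
        # Audio accepts either field: file (transcription) or input (TTS).
        if "file" not in payload and "input" not in payload:
            return "Request type 'audio' requires 'file' or 'input' field"
        return None
    required = REQUIRED_FIELD.get(request_type)
    if required is None:
        return f"Unknown request type '{request_type}'"
    if required not in payload:
        return f"Request type '{request_type}' requires '{required}' field"
    return None

# Sending messages to an image workflow is the mistake shown in the
# validation error example above.
err = check_payload("image", {"messages": []})
```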
Best practices
- Match request type to use case: Choose the most appropriate type for your application
- Configure fallbacks: Set backup providers that support the same request type
- Validate inputs: Ensure your application sends the correct message format for the type
- Use structured outputs: Combine request types with structured outputs for consistent responses
- Monitor usage: Track which request types consume the most tokens/resources
Next steps
- Response formats: Choose raw or wrapped format
- Endpoints: Full endpoint reference
- Workflows: Configure workflows in the console