Build a workflow
- Open Workflows in your project and click Create Workflow.
- Enter a name. Choose a consistent convention such as `team-purpose-environment`.
- Select the primary provider and model. Optional fallbacks let you specify secondary and tertiary options.
- Attach a structured output for strict JSON responses.
- Add cache fields (comma or newline separated). Use dot notation for nested data (`metadata.segment`) and indexes for arrays (`messages.0.content`).
- Use the connected nodes icon in the sidebar to open Workflows; the form repeats the provider (server glyph) and model (chip icon) selectors so you recognise them from other views.
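The dot-notation rules above can be sketched as a small resolver. This is a hypothetical helper for illustration, not part of any ModelRiver SDK:

```python
def resolve_cache_field(payload, path):
    """Walk a dotted path such as 'metadata.segment' or 'messages.0.content'.

    Numeric segments index into lists; everything else is a dict key.
    Returns None when any segment is missing.
    """
    current = payload
    for segment in path.split("."):
        if isinstance(current, list) and segment.isdigit():
            index = int(segment)
            current = current[index] if index < len(current) else None
        elif isinstance(current, dict):
            current = current.get(segment)
        else:
            return None
        if current is None:
            return None
    return current

request = {
    "messages": [{"role": "user", "content": "Hello"}],
    "metadata": {"segment": "enterprise"},
}
print(resolve_cache_field(request, "metadata.segment"))    # enterprise
print(resolve_cache_field(request, "messages.0.content"))  # Hello
```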
Customer data
- Cached fields appear in API responses under `customer_data` and in the Request Logs table.
- Use them to surface business identifiers (user IDs, segments, experiment buckets) without storing full payloads elsewhere.
- Customer data does not replace your persistence layer; treat it as a convenient echo for observability.
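For instance, a client reading the echo back out of a response. The response shape here is illustrative; only the `customer_data` field name comes from the docs above:

```python
response = {
    "data": {"sentiment": "positive"},
    "customer_data": {"metadata.segment": "enterprise"},  # echoed cache fields
}

# Treat the echo as observability metadata, not as your system of record.
segment = response.get("customer_data", {}).get("metadata.segment")
print(segment)  # enterprise
```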
Structured outputs
- Attach any JSON schema to guarantee the shape of model responses.
- Include examples in the schema to guide providers. ModelRiver validates and merges responses so required fields are always present.
- If a provider is down or times out, ModelRiver can synthesise a schema-compliant fallback, flagged via `meta.offline_fallback = true`.
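As a minimal sketch: a schema with an inline example, plus a check for the synthesised-fallback flag. The schema fields are invented for illustration; only `meta.offline_fallback` comes from the docs:

```python
# Hypothetical structured output: the examples key guides providers toward the shape.
schema = {
    "type": "object",
    "required": ["sentiment", "confidence"],
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "neutral", "negative"]},
        "confidence": {"type": "number"},
    },
    "examples": [{"sentiment": "positive", "confidence": 0.92}],
}

def is_offline_fallback(response: dict) -> bool:
    """True when ModelRiver synthesised the response instead of a provider."""
    return bool(response.get("meta", {}).get("offline_fallback"))

print(is_offline_fallback({"meta": {"offline_fallback": True}}))  # True
print(is_offline_fallback({"meta": {}}))                          # False
```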
Fallback behaviour
- ModelRiver automatically retries with the backups you configure when the primary provider fails or times out.
- Every attempt appears in `meta.attempts`, including reasons for failures when available.
- Use backups to mix vendors (for example, primary OpenAI, fallback Anthropic) or to tier model sizes.
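The retry trail can be inspected along these lines. Note this is a sketch: the per-attempt keys `provider`, `model`, `status`, and `reason` are assumptions; only `meta.attempts` itself is named above:

```python
def summarise_attempts(response: dict) -> list:
    """Render each entry of meta.attempts as 'provider/model: status (reason)'."""
    lines = []
    for attempt in response.get("meta", {}).get("attempts", []):
        line = f"{attempt.get('provider')}/{attempt.get('model')}: {attempt.get('status')}"
        if attempt.get("reason"):
            line += f" ({attempt['reason']})"
        lines.append(line)
    return lines

response = {
    "meta": {
        "attempts": [
            {"provider": "openai", "model": "gpt-4o", "status": "failed", "reason": "timeout"},
            {"provider": "anthropic", "model": "claude-sonnet", "status": "succeeded"},
        ]
    }
}
for line in summarise_attempts(response):
    print(line)
```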
Test Mode
Test Mode allows you to integrate and test your application without making actual AI provider calls or consuming your request quota.
How it works
- When creating or editing a workflow, switch the mode toggle from Production to Testing.
- In Test Mode, you must select a Structured Output that contains sample data.
- Optionally configure a Response Delay (in milliseconds) to simulate API latency.
- When you make API requests to a workflow in Test Mode:
  - The sample data from the structured output is returned immediately (after any configured delay)
  - No AI providers are called
  - No requests are logged or counted against your quota
  - The response format matches what you'd receive from a real AI provider
Use cases for Test Mode
- Integration testing: Test your application's AI integration without consuming API credits
- Development environments: Develop and debug with predictable responses
- CI/CD pipelines: Run automated tests against your API without external dependencies
- Demo environments: Showcase functionality without incurring costs or needing API keys
Response format
Test Mode responses include:
- The sample data from your structured output
- Cached fields (if configured) echoed from your request
- Metadata indicating `test_mode: true` and `provider: "Testing"`
- Standard token usage fields (set to 0)
- Compatible with both `raw` and `wrapped` response formats
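Putting those fields together, a CI guard might verify it is really talking to Test Mode. The nesting of `meta` and the zeroed `usage` block are assumptions based on the fields listed above:

```python
def assert_test_mode(response: dict) -> None:
    """Fail fast in CI if a workflow was accidentally left in Production mode."""
    meta = response.get("meta", {})
    if not meta.get("test_mode") or meta.get("provider") != "Testing":
        raise RuntimeError("Expected a Test Mode response")

sample = {
    "data": {"sentiment": "positive"},            # sample data from the structured output
    "customer_data": {"metadata.segment": "qa"},  # cached fields echoed back
    "meta": {"test_mode": True, "provider": "Testing"},
    "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
}
assert_test_mode(sample)  # passes silently
```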
Best practices
- Always define sample data that matches your schema structure
- Use realistic sample data to better simulate production behavior
- Configure response delays to match expected AI provider latency
- Switch workflows back to Production mode before deploying to production environments
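To enforce the first practice, you can check sample data against the schema's required keys before saving it. This is a hand-rolled sketch; a full validator such as `jsonschema` would be stricter:

```python
def missing_required(schema: dict, sample: dict) -> list:
    """Return required keys the sample data does not provide."""
    return [key for key in schema.get("required", []) if key not in sample]

schema = {"type": "object", "required": ["sentiment", "confidence"]}
sample = {"sentiment": "positive"}
print(missing_required(schema, sample))  # ['confidence']
```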
Test a workflow
- Open the Playground from the project sidebar (Projects → Playground).
- Pick a workflow from the dropdown.
- Provide a JSON payload containing the fields your workflow expects, commonly a `messages` array and optional `metadata`.
- Click Send Request. ModelRiver executes the workflow without persisting a log entry, and the response appears inline.
- Iterate on prompts, cache fields, or structured outputs based on the preview without impacting production analytics.
- The play button icon in the sidebar marks the Playground entry point; inside you'll see the same connected nodes icon on the workflow selector so you know you're running an existing configuration.
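A typical Playground payload, following the shape described above (all field values are illustrative):

```python
import json

payload = {
    "messages": [
        {"role": "system", "content": "You are a support triage assistant."},
        {"role": "user", "content": "My invoice is wrong."},
    ],
    "metadata": {"segment": "enterprise", "user_id": "usr_123"},
}
print(json.dumps(payload, indent=2))
```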
When to create multiple workflows
- Different use cases or product features require distinct prompts or structured outputs.
- You want to A/B test providers or models—create dedicated workflows for each variant.
- You serve multiple customer tiers with different fallbacks or cost considerations.
Best practices
- Keep workflow names human-readable; they appear in API responses and logs.
- Document the intent of each workflow within your team’s runbooks or inside the description field.
- Monitor fallback frequency—if backups trigger frequently, adjust primary providers or prompts.
- Use the Workflow Playground before promoting changes to production.