## What is test mode?
Test mode allows you to validate your ModelRiver integration without making actual AI provider calls or consuming your request quota. It returns the sample data from your structured output configuration, giving you predictable, repeatable responses for development and testing.
## How it works
- When creating or editing a workflow, switch the mode toggle from Production to Testing.
- In test mode, you must select a Structured Output that contains sample data.
- Optionally configure a Response Delay (in milliseconds) to simulate API latency.
- When you make API requests to a workflow in test mode:
  - The sample data from the structured output is returned immediately (after any configured delay).
  - No AI providers are called.
  - No requests are logged or counted against your quota.
  - The response format matches what you'd receive from a real AI provider.
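The dispatch behavior described above can be sketched roughly as follows. This is a minimal illustration, not ModelRiver's actual implementation; field names such as `response_delay_ms` and `structured_output` are assumptions for the sketch.

```python
import time

def handle_request(workflow: dict, request: dict) -> dict:
    """In test mode, return the structured output's sample data after the
    configured delay instead of calling an AI provider."""
    if workflow["mode"] == "testing":
        delay_ms = workflow.get("response_delay_ms", 0)
        time.sleep(delay_ms / 1000)  # simulate AI provider latency
        return workflow["structured_output"]["sample_data"]
    return call_ai_provider(workflow, request)  # production path (not shown)

def call_ai_provider(workflow: dict, request: dict) -> dict:
    raise NotImplementedError("production path")

# Example: a workflow in test mode with a 50 ms simulated delay
workflow = {
    "mode": "testing",
    "response_delay_ms": 50,
    "structured_output": {"sample_data": {"sentiment": "positive"}},
}
print(handle_request(workflow, {"input": "Great service!"}))  # → {'sentiment': 'positive'}
```

The key point is that the production path is never reached in test mode, which is why no quota is consumed and no provider keys are needed.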
## Use cases
| Use case | Benefit |
|---|---|
| Integration testing | Test your application's AI integration without consuming API credits |
| Development environments | Develop and debug with predictable responses |
| CI/CD pipelines | Run automated tests against your API without external dependencies |
| Demo environments | Showcase functionality without incurring costs or needing API keys |
| Load testing | Validate throughput handling without provider rate limits |
## Response format
Test mode responses include:
- The sample data from your structured output
- Cached fields (if configured) echoed from your request
- Metadata indicating `test_mode: true` and `provider: "Testing"`
- Standard token usage fields (set to 0)
- Compatible with both `raw` and `wrapped` response formats
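These metadata fields make it easy for client code to detect test-mode responses, for example to guard against accidentally shipping a test-mode workflow. A minimal check based on the documented fields (the helper name is illustrative):

```python
def is_test_mode_response(response: dict) -> bool:
    """Return True if a wrapped response came from a workflow in test mode,
    based on the documented metadata fields."""
    meta = response.get("meta", {})
    tokens = meta.get("tokens", {})
    return (
        meta.get("test_mode") is True
        and meta.get("provider") == "Testing"
        and all(tokens.get(k) == 0 for k in ("prompt", "completion", "total"))
    )

# Using the response shape documented on this page:
response = {
    "data": {"summary": "Sample summary text"},
    "meta": {
        "provider": "Testing",
        "model": "test-mode",
        "test_mode": True,
        "tokens": {"prompt": 0, "completion": 0, "total": 0},
    },
}
print(is_test_mode_response(response))  # → True
```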
### Example response
```json
{
  "data": {
    "summary": "Sample summary text",
    "sentiment": "positive",
    "category": "support"
  },
  "meta": {
    "provider": "Testing",
    "model": "test-mode",
    "test_mode": true,
    "tokens": {
      "prompt": 0,
      "completion": 0,
      "total": 0
    },
    "duration_ms": 150
  },
  "customer_data": {
    "user_id": "usr_test123"
  }
}
```

## Best practices
- Always define sample data that matches your schema structure
- Use realistic sample data to better simulate production behavior
- Configure response delays to match expected AI provider latency (typically 500-3000ms)
- Switch workflows back to Production mode before deploying to production environments
- Use test mode in CI/CD to validate end-to-end request handling without flaky provider dependencies
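The first best practice, keeping sample data aligned with your schema, can be checked mechanically in CI. A minimal sketch, assuming a flat field-to-type schema (the helper and schema shape are illustrative, not a ModelRiver API):

```python
def matches_schema(sample: dict, schema: dict) -> list:
    """Return a list of mismatches between sample data and a flat
    {field: type} schema; an empty list means the sample matches."""
    problems = []
    for field, expected_type in schema.items():
        if field not in sample:
            problems.append(f"missing field: {field}")
        elif not isinstance(sample[field], expected_type):
            problems.append(f"wrong type for field: {field}")
    for field in sample:
        if field not in schema:
            problems.append(f"unexpected field: {field}")
    return problems

schema = {"summary": str, "sentiment": str, "category": str}
sample = {"summary": "Sample summary text", "sentiment": "positive", "category": "support"}
print(matches_schema(sample, schema))  # → []
```

Running a check like this in CI catches sample data that has drifted from the schema before your integration tests start returning misleading results.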
## Next steps
- Build a workflow: Create and configure workflows
- Structured outputs: Define sample data schemas
- API documentation: Understand request and response formats