
Test mode

Integrate and test your application with predictable responses, zero provider calls, and no quota consumption.

What is test mode?

Test mode allows you to validate your ModelRiver integration without making actual AI provider calls or consuming your request quota. It returns the sample data from your structured output configuration, giving you predictable, repeatable responses for development and testing.

How it works

  1. When creating or editing a workflow, switch the mode toggle from Production to Testing.
  2. In test mode, you must select a Structured Output that contains sample data.
  3. Optionally configure a Response Delay (in milliseconds) to simulate API latency.
  4. When you make API requests to a workflow in test mode (see the sketch after this list):
    • The sample data from the structured output is returned after any configured delay (immediately if none is set)
    • No AI providers are called
    • No requests are logged or counted against your quota
    • The response format matches what you'd receive from a real AI provider
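
For example, here is a minimal sketch of what a call to a test-mode workflow might look like. The base URL, endpoint path, workflow ID, and header values are hypothetical placeholders, not ModelRiver's documented API; substitute the values from your own dashboard.

TypeScript
// Minimal sketch of calling a workflow that has been switched to test mode.
// The URL, path, and credentials are hypothetical placeholders; use the
// values from your own ModelRiver dashboard.
const response = await fetch(
  "https://api.modelriver.example/v1/workflows/wf_123/run", // placeholder endpoint
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_API_KEY", // placeholder key
    },
    body: JSON.stringify({ input: "Summarize this support ticket..." }),
  },
);

const body = await response.json();
// In test mode, body.data is the sample data from your structured output,
// returned after any configured delay; no AI provider is called.
console.log(body.meta.test_mode); // true
console.log(body.data);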

Use cases

  • Integration testing: Test your application's AI integration without consuming API credits
  • Development environments: Develop and debug with predictable responses
  • CI/CD pipelines: Run automated tests against your API without external dependencies
  • Demo environments: Showcase functionality without incurring costs or needing API keys
  • Load testing: Validate throughput handling without provider rate limits (see the sketch below)
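
To illustrate the load-testing case, the sketch below fires a batch of concurrent requests at a test-mode workflow and reports the elapsed time. The URL and key are the same hypothetical placeholders as in the earlier sketch.

TypeScript
// Sketch: send N concurrent requests to a test-mode workflow to exercise
// your application's throughput handling. No provider rate limits apply,
// since no providers are called. URL and key are placeholders.
const N = 50;
const started = Date.now();
const statuses = await Promise.all(
  Array.from({ length: N }, () =>
    fetch("https://api.modelriver.example/v1/workflows/wf_123/run", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: "Bearer YOUR_API_KEY",
      },
      body: JSON.stringify({ input: "load test" }),
    }).then((r) => r.status),
  ),
);
console.log(`${N} requests completed in ${Date.now() - started}ms`);
console.log(`all ok: ${statuses.every((s) => s === 200)}`);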

Response format

Test mode responses include:

  • The sample data from your structured output
  • Cached fields (if configured) echoed from your request
  • Metadata indicating test_mode: true and provider: "Testing"
  • Standard token usage fields (set to 0)
  • Compatibility with both raw and wrapped response formats

Example response

JSON
{
  "data": {
    "summary": "Sample summary text",
    "sentiment": "positive",
    "category": "support"
  },
  "meta": {
    "provider": "Testing",
    "model": "test-mode",
    "test_mode": true,
    "tokens": {
      "prompt": 0,
      "completion": 0,
      "total": 0
    },
    "duration_ms": 150
  },
  "customer_data": {
    "user_id": "usr_test123"
  }
}
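
Since meta.test_mode is set on every test-mode response, client code can branch on it, for instance to keep test traffic out of usage dashboards. Below is a minimal sketch: the field names follow the example above, and recordTokenUsage is a hypothetical stub.

TypeScript
// Sketch: skip usage tracking for test-mode responses. Field names follow
// the example response above; recordTokenUsage is a hypothetical stub.
interface ModelRiverMeta {
  provider: string;
  model: string;
  test_mode?: boolean;
  tokens: { prompt: number; completion: number; total: number };
  duration_ms: number;
}

function recordTokenUsage(total: number): void {
  // Stub: wire this to your real usage tracking.
  console.log(`tokens used: ${total}`);
}

function handleMeta(meta: ModelRiverMeta): void {
  if (meta.test_mode) {
    // Test-mode responses report zero tokens and provider "Testing",
    // so don't count them toward usage.
    return;
  }
  recordTokenUsage(meta.tokens.total);
}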

Best practices

  • Always define sample data that matches your schema structure
  • Use realistic sample data to better simulate production behavior
  • Configure response delays to match expected AI provider latency (typically 500-3000ms)
  • Switch workflows back to Production mode before deploying to production environments
  • Use test mode in CI/CD to validate end-to-end request handling without flaky provider dependencies (a test sketch follows below)
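
As a concrete version of the CI/CD point above, the following sketch asserts against the predictable sample data using Node's built-in node:test runner. callWorkflow stands in for whatever client code your application already has; the endpoint and key are placeholders, and the asserted field names follow the example response earlier on this page.

TypeScript
// Sketch of a CI test against a workflow left in test mode. callWorkflow
// is a stand-in for your own client; endpoint and key are placeholders.
import { test } from "node:test";
import assert from "node:assert/strict";

async function callWorkflow(input: string): Promise<any> {
  const res = await fetch("https://api.modelriver.example/v1/workflows/wf_123/run", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_API_KEY",
    },
    body: JSON.stringify({ input }),
  });
  return res.json();
}

test("test-mode workflow returns the configured sample data", async () => {
  const body = await callWorkflow("any input");
  assert.equal(body.meta.test_mode, true);
  assert.equal(body.meta.provider, "Testing");
  assert.equal(body.meta.tokens.total, 0); // no quota consumed
  assert.equal(typeof body.data.summary, "string");
});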
