ModelRiver uses structured error responses to help you quickly identify and resolve issues. Errors follow different formats depending on which API you use.
## OpenAI-compatible error format
When using the OpenAI compatibility layer, errors follow the standard OpenAI error envelope:
```json
{
  "error": {
    "message": "Workflow 'xyz' does not exist. Create it in the console first.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_found"
  }
}
```

### Error codes
| HTTP | Code | Meaning |
|---|---|---|
| 400 | `invalid_request_error` | Missing or invalid parameters |
| 401 | `authentication_error` | Invalid or missing API key |
| 404 | `model_not_found` | Workflow doesn't exist |
| 400 | `model_not_supported` | Workflow is event-driven (use the async API) |
| 429 | `rate_limit_exceeded` | Too many requests; implement backoff |
| 502 | `upstream_error` | Provider returned an error |
| 503 | `service_unavailable` | Feature disabled or internal failure |
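These codes map onto the typed exceptions raised by the official OpenAI SDKs. As a minimal sketch (assuming the `openai` Python SDK v1.x and the `my_workflow` placeholder used elsewhere on this page), a client can branch on them like this:

```python
from openai import (
    OpenAI,
    AuthenticationError,  # 401
    NotFoundError,        # 404 model_not_found
    RateLimitError,       # 429
    APIStatusError,       # base class carrying status_code / code
)

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

try:
    response = client.chat.completions.create(
        model="my_workflow",
        messages=[{"role": "user", "content": "Hello"}],
    )
except AuthenticationError as e:
    raise SystemExit(f"Fix the API key before retrying: {e.message}")
except NotFoundError as e:
    raise SystemExit(f"Create the workflow in the console first: {e.message}")
except RateLimitError:
    pass  # back off and retry; see "Retry strategies" below
except APIStatusError as e:
    # 400 / 502 / 503 and anything else with an HTTP status
    print(f"HTTP {e.status_code}, code={e.code}: {e.message}")
```

The `429` and `5xx` branches are where the retry strategies shown later on this page apply.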
## Native API error format
When using the native ModelRiver API (`/v1/ai`), errors are returned in a wrapped format:
```json
{
  "data": null,
  "customer_data": {},
  "error": {
    "message": "Provider request failed",
    "details": {"status": 504, "message": "Upstream timeout"}
  },
  "meta": {
    "status": "error",
    "http_status": 502,
    "workflow": "marketing-summary",
    "attempts": [
      {"provider": "openai", "model": "gpt-4o-mini", "status": "error", "reason": "timeout"}
    ]
  }
}
```

Note: ModelRiver returns `200` with an `error` object when the request was valid but the provider failed. Transport/authentication problems return standard HTTP status codes (`401`, `403`, `429`, `5xx`).
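Because a provider failure arrives with HTTP `200`, native API clients should branch on the envelope rather than the transport status. A minimal sketch, assuming the envelope shape shown above (the request itself is omitted because its body is workflow-specific):

```python
def unwrap(payload: dict) -> dict:
    """Return data from a native /v1/ai envelope, or raise if the provider failed."""
    meta = payload.get("meta", {})
    if meta.get("status") == "error":
        # Summarize which providers were tried and why each one failed
        tried = ", ".join(
            f"{a.get('provider')} ({a.get('reason')})"
            for a in meta.get("attempts", [])
        )
        raise RuntimeError(
            f"{payload['error']['message']} "
            f"[effective status {meta.get('http_status')}; tried: {tried}]"
        )
    return payload["data"]
```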
## Common error scenarios

### Authentication errors (401)
```json
{
  "error": {
    "message": "Invalid API key provided",
    "type": "authentication_error",
    "code": "invalid_api_key"
  }
}
```

Solutions:
- Verify your API key starts with `mr_live_` and hasn't expired
- Check the key hasn't been rotated or revoked
- Ensure the `Authorization` header uses the `Bearer` prefix (see the sketch below)
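For reference, a minimal sketch of a correctly formed request with `requests`, assuming the standard OpenAI-compatible path `/v1/chat/completions` implied by the base URL used in the retry examples below:

```python
import requests

resp = requests.post(
    "https://api.modelriver.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer mr_live_YOUR_API_KEY",  # Bearer prefix is required
        "Content-Type": "application/json",
    },
    json={"model": "my_workflow", "messages": [{"role": "user", "content": "ping"}]},
)
if resp.status_code == 401:
    print(resp.json()["error"]["message"])  # e.g. "Invalid API key provided"
```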
### Workflow not found (404)
```json
{
  "error": {
    "message": "Workflow 'my-workflow' does not exist. Create it in the console first.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_found"
  }
}
```

Solutions:
- Verify the workflow name matches exactly (case-sensitive)
- Check the workflow exists in the correct project
- Ensure the API key belongs to the project containing the workflow
### Event-driven workflow error (400)
```json
{
  "error": {
    "message": "Workflow 'my-workflow' is event-driven and cannot be used with the chat completions endpoint. Use the async API instead.",
    "type": "invalid_request_error",
    "code": "model_not_supported"
  }
}
```

Solutions:
- Use the async endpoint (`POST /v1/ai/async`) for event-driven workflows, as sketched below
- Or remove the `event_name` from the workflow configuration
- See Webhooks for event-driven workflow details
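Purely as an illustration of the first option, an async call might look like the sketch below. The request body is an assumption (the field names are hypothetical), so check the Webhooks/async API reference for the real schema:

```python
import requests

resp = requests.post(
    "https://api.modelriver.com/v1/ai/async",  # async endpoint from the solution above
    headers={"Authorization": "Bearer mr_live_YOUR_API_KEY"},
    # NOTE: illustrative payload only; field names are assumptions, not the documented schema
    json={
        "workflow": "my-workflow",
        "input": {"messages": [{"role": "user", "content": "Hello"}]},
    },
)
print(resp.status_code, resp.json())
```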
### Provider errors (502)
```json
{
  "error": {
    "message": "Provider request failed",
    "details": {"status": 504, "message": "Upstream timeout"}
  },
  "meta": {
    "attempts": [
      {"provider": "openai", "model": "gpt-4o", "status": "error", "reason": "timeout"},
      {"provider": "anthropic", "model": "claude-3-5-sonnet", "status": "error", "reason": "rate_limited"}
    ]
  }
}
```

Solutions:
- Check provider status pages for outages
- Add fallback providers to your workflow
- Implement client-side retry with exponential backoff
- Review the `attempts` array to understand which providers were tried (see the sketch below)
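A sketch of that last point: the `reason` strings in `attempts` can drive whether a client-side retry is worthwhile. The transient set below (`timeout`, `rate_limited`) is taken from the example payload above and is an assumption, not an exhaustive list:

```python
TRANSIENT_REASONS = {"timeout", "rate_limited"}  # assumed set; extend as you observe reasons

def should_retry(payload: dict) -> bool:
    """Retry only when every provider attempt failed for a transient reason."""
    attempts = payload.get("meta", {}).get("attempts", [])
    return bool(attempts) and all(
        a.get("reason") in TRANSIENT_REASONS for a in attempts
    )
```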
### Rate limiting (429)
```json
{
  "error": {
    "message": "Rate limit exceeded. Please retry after 30 seconds.",
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded"
  }
}
```

Solutions:
- Implement exponential backoff
- Check your plan's rate limits
- Contact support to increase limits if needed
## Retry strategies

### Exponential backoff (Python)
```python
import time
import random
from openai import OpenAI, RateLimitError, APIStatusError

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY"
)

def make_request_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(
                model="my_workflow",
                messages=messages
            )
        except RateLimitError:
            if attempt < max_retries - 1:
                wait = (2 ** attempt) + random.random()
                print(f"Rate limited. Retrying in {wait:.1f}s...")
                time.sleep(wait)
            else:
                raise
        except APIStatusError as e:
            if e.status_code >= 500 and attempt < max_retries - 1:
                wait = (2 ** attempt) + random.random()
                print(f"Server error. Retrying in {wait:.1f}s...")
                time.sleep(wait)
            else:
                raise
```

### Exponential backoff (Node.js)
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: "mr_live_YOUR_API_KEY",
});

async function makeRequestWithRetry(messages, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await client.chat.completions.create({
        model: "my_workflow",
        messages,
      });
    } catch (error) {
      const isRetryable = error.status === 429 || error.status >= 500;

      if (isRetryable && attempt < maxRetries - 1) {
        const wait = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
        console.log(`Retrying in ${(wait / 1000).toFixed(1)}s...`);
        await new Promise((r) => setTimeout(r, wait));
      } else {
        throw error;
      }
    }
  }
}
```

## Error handling best practices
- Always check `meta.attempts`: Understand which providers were tried and why they failed
- Implement exponential backoff: Essential for `429` and `5xx` errors
- Add fallback providers: Configure backup providers in workflow settings to reduce errors
- Log error details: Store `error.details` and `meta.attempts` for post-mortem analysis
- Handle gracefully in UI: Show meaningful messages to users, not raw error payloads
- Set client-side timeouts: Don't wait indefinitely for responses (see the sketch after this list)
- Monitor error rates: Use Observability to track error patterns
- Differentiate error types: Authentication errors need different handling than transient provider errors
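For the timeout point in the list above, the OpenAI Python SDK accepts a client-level `timeout` and a per-request override via `with_options`; a minimal sketch:

```python
from openai import OpenAI

# Fail after 30s instead of waiting indefinitely (the SDK raises APITimeoutError)
client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
    timeout=30.0,
)

# Tighter budget for a latency-sensitive call
fast_client = client.with_options(timeout=10.0)
```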
## Next steps
- Authentication: API key management
- Observability: Monitor errors in Request Logs
- Troubleshooting: Common issues and solutions