## Overview
The main request is the central component of the timeline—it represents the AI provider call that ultimately returned the response to your application. If the fallback chain was invoked, the main request is the attempt that finally succeeded (or the last attempt that failed if all providers were exhausted).
## How the main request appears
### In the timeline
- Position: After any failover attempts, before webhook deliveries
- Badge color: Green for success, red for error
- Badge content: Provider icon, provider name, and model name
- Additional info: Duration, token usage, and timestamp
### When clicked
Clicking the main request reveals comprehensive information:
#### Header information
| Field | Description |
|---|---|
| Provider icon and name | Visual identification of the AI provider (e.g., OpenAI, Anthropic) |
| Model name | Specific model used (e.g., gpt-4o-mini, claude-3-5-sonnet-20241022) |
| Status | Success (green), Failed (red), or Error (red) |
| Duration | Request latency in milliseconds |
| Timestamp | When the request occurred (relative time) |
#### Request Body tab
The Request Body tab shows exactly what was sent to the AI provider:
- **Raw JSON view** – Complete JSON payload in a formatted, syntax-highlighted code editor
  - See the exact structure and content sent to the provider
  - Useful for debugging failed requests, verifying prompt content, or understanding request formatting
- **Preview (tree view)** – Interactive JSON tree viewer for easier navigation
  - Explore large payloads more easily, collapse/expand sections, and focus on specific fields
  - Great for inspecting complex nested structures or large message arrays
- **Copy functionality** – One-click copy of the entire request body
  - Quickly share request details with team members or use in API testing tools
Common fields in the request body:
- `model` – The model identifier sent to the provider
- `messages` – The conversation messages (system, user, assistant)
- `temperature` – Randomness control parameter
- `max_tokens` – Maximum tokens to generate
- `response_format` – Structured output configuration (if applicable)
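For reference, a request body containing these common fields might look like the following. This is a minimal sketch in the generic OpenAI-style chat-completion shape; the exact field names and values depend on your provider and workflow configuration.

```python
import json

# Hypothetical request body illustrating the common fields above
# (OpenAI-style chat-completion shape; shapes vary by provider).
request_body = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the attached report."},
    ],
    "temperature": 0.2,
    "max_tokens": 512,
    "response_format": {"type": "json_object"},
}

# The Raw JSON view shows the payload roughly as serialized here.
print(json.dumps(request_body, indent=2))
```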
#### Response Body tab
The Response Body tab shows the complete response from the AI provider:
- **Raw JSON view** – Full provider response in a formatted code editor
  - See exactly what the provider returned, including all metadata
  - Useful for analyzing response quality, debugging parsing issues, or verifying structured output compliance
- **Preview (tree view)** – Interactive JSON tree for response exploration
  - Navigate large responses, focus on specific fields, and understand response structure
- **Copy functionality** – Copy the entire response for analysis or sharing
Common fields in the response body:
- `id` – Provider's unique response identifier
- `choices` – The generated content (text or structured output)
- `usage` – Token consumption breakdown
  - `prompt_tokens` – Input tokens consumed
  - `completion_tokens` – Output tokens generated
  - `total_tokens` – Sum of input and output
- `model` – The actual model that processed the request (may differ from the requested model)
- `finish_reason` – Why the generation stopped (`stop`, `length`, `content_filter`)
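As an illustration, these fields can be pulled out of a response body like so. The sample response below is fabricated in the OpenAI-style chat-completion shape; real responses vary by provider.

```python
# Hypothetical sample of a provider response body (OpenAI-style shape).
response_body = {
    "id": "chatcmpl-abc123",
    "model": "gpt-4o-mini-2024-07-18",  # may differ from the requested model
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Here is the summary..."},
            "finish_reason": "stop",
        },
    ],
    "usage": {"prompt_tokens": 120, "completion_tokens": 48, "total_tokens": 168},
}

content = response_body["choices"][0]["message"]["content"]
finish_reason = response_body["choices"][0]["finish_reason"]
usage = response_body["usage"]

# total_tokens should equal prompt_tokens + completion_tokens.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(finish_reason, usage["total_tokens"])
```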
## Understanding the main request
### Success vs. failure
Successful main request (green badge):
- Provider returned a valid response
- Token usage is recorded
- Response body contains the AI-generated content
- This is the response delivered to your application
Failed main request (red badge):
- All providers in the fallback chain failed
- The main request represents the last attempted provider
- Response body contains the error from the final attempt
- Your application received an error response
### When there are failover attempts
If the timeline shows failover attempts before the main request:
- The main request used a downstream fallback provider that succeeded
- The failed providers' errors are visible in their respective timeline items
- The main request's provider/model may be different from your workflow's primary provider
### Token usage on the main request
Token usage is captured on the main request:
- Prompt tokens: Tokens in the request to the provider
- Completion tokens: Tokens in the provider's response
- Total tokens: Sum of prompt and completion tokens
Note: Failed failover attempts may also consume tokens, depending on the provider and error type. The token counts on the main request reflect only this final call, not any earlier attempts.
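When auditing usage, the relationship above can be checked programmatically. This is a small sketch; the `usage` dict shape is the common OpenAI-style one and is an assumption.

```python
def usage_is_consistent(usage: dict) -> bool:
    """Return True if total_tokens equals prompt_tokens + completion_tokens."""
    expected = usage.get("prompt_tokens", 0) + usage.get("completion_tokens", 0)
    return usage.get("total_tokens") == expected

# Consistent usage block passes the check.
print(usage_is_consistent(
    {"prompt_tokens": 120, "completion_tokens": 48, "total_tokens": 168}
))  # True
```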
## Inspecting the main request for debugging
### Verifying request content
Open the Request Body tab to verify:
- **System prompt** – Is it correct and complete?
- **User message** – Does it contain the expected input?
- **Model** – Is the intended model being used?
- **Parameters** – Are `temperature`, `max_tokens`, etc. set correctly?
- **Structured output schema** – If applicable, is the schema correct?
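The checklist above can be automated as a quick sanity check on a copied request body. This is a sketch: the helper name and the expected values are placeholders you would adapt to your workflow.

```python
def check_request(body: dict, expected_model: str, max_temperature: float = 1.0) -> list:
    """Return a list of problems found in a copied request body."""
    problems = []
    if body.get("model") != expected_model:
        problems.append(f"model is {body.get('model')!r}, expected {expected_model!r}")
    messages = body.get("messages", [])
    if not any(m.get("role") == "system" for m in messages):
        problems.append("no system prompt present")
    if not any(m.get("role") == "user" for m in messages):
        problems.append("no user message present")
    if body.get("temperature", 0) > max_temperature:
        problems.append("temperature above expected ceiling")
    return problems
```

`check_request(copied_body, expected_model="gpt-4o-mini")` returns an empty list when everything matches.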
### Verifying response content
Open the Response Body tab to verify:
- **Generated content** – Does the response match expectations?
- **Finish reason** – Did the response complete naturally (`stop`) or hit a limit (`length`)?
- **Token usage** – Are token counts in the expected range?
- **Structured output** – If applicable, does the response match the schema?
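For the finish-reason check in particular, a small helper can flag truncated or filtered responses. This sketch assumes the OpenAI-style `finish_reason` values listed earlier.

```python
def finish_status(response_body: dict) -> str:
    """Classify how a chat-completion response ended, based on finish_reason."""
    reason = response_body["choices"][0].get("finish_reason")
    if reason == "stop":
        return "completed naturally"
    if reason == "length":
        return "truncated: hit max_tokens, consider raising the limit"
    if reason == "content_filter":
        return "blocked or trimmed by the provider's content filter"
    return f"other: {reason!r}"
```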
### Reproducing issues
Use the copy functionality to:
- Copy the request body
- Test it directly against the provider's API
- Compare responses to identify ModelRiver-specific vs. provider-specific issues
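Testing the copied body against the provider can be done with a short script. This is a sketch using only Python's standard library and OpenAI's public chat-completions endpoint; the file name and environment variable below are assumptions.

```python
import json
import urllib.request

def replay(request_body: dict, api_key: str,
           url: str = "https://api.openai.com/v1/chat/completions") -> dict:
    """POST a copied request body directly to the provider and return its JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(request_body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

After saving the copied body to a file such as `request_body.json`, run it with, e.g., `replay(json.load(open("request_body.json")), os.environ["OPENAI_API_KEY"])` and diff the result against the Response Body tab.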
## Next steps
- Provider Failover – Understanding failed attempts before the main request
- Webhook Deliveries – What happens after the main request (async)
- Backend Callbacks – Event-driven workflow responses
- Debugging – Complete debugging guide
- Back to Timeline – Timeline overview