Track every webhook delivery

For async requests, webhooks notify your backend about completed AI responses. The timeline tracks every delivery attempt—successes, failures, and retries.

Overview

For async requests and event-driven workflows, ModelRiver notifies your backend about completed AI responses via webhooks. The timeline captures every delivery attempt, including the payload sent, your endpoint's response, delivery status, and retry information. This gives you complete visibility into whether your backend received the AI response.


How webhook deliveries appear

In the timeline

  • Position: After the main request
  • Badge colors:
    • Gray – Planned (queued, not yet sent)
    • Blue – Delivering (HTTP request in progress)
    • Green – Success (delivered successfully)
    • Red – Error (delivery failed)
  • Badge content: Webhook URL and status

When clicked

Clicking a webhook delivery reveals:

Status information

  • Delivery status – Current state of delivery (Planned, Delivering, Success, Error)
  • Callback status (event-driven workflows only):
    • Progress – Callback expected but not yet received
    • Success – Callback received
    • Error – Callback indicated error or timed out
  • Retry button – Available when delivery failed and retry is allowed

Delivery metadata

  • Webhook URL – The endpoint that received (or should receive) the webhook
  • Delivery time – When the webhook was sent (relative time)
  • Duration – How long the HTTP request took
  • HTTP status code – Response code from your endpoint
  • Error message – Detailed error if delivery failed

Request Data tab (webhook payload)

Shows the exact payload sent to your webhook endpoint:

  • Raw JSON view – Complete webhook payload
  • Preview (tree view) – Interactive JSON tree
  • Copy functionality – Copy payload for testing or debugging

Response tab (webhook response)

Shows the response from your webhook endpoint:

  • Raw view – Complete HTTP response body
  • Preview (tree view) – If response is JSON
  • Copy functionality – Copy response for analysis

Webhook delivery lifecycle

Status progression

Planned → Delivering → Success
Planned → Delivering → Error → (Retry) → Delivering → Success
Planned → Delivering → Error (retries exhausted)

Detailed states

  1. Planned – Webhook is queued and scheduled for delivery. This is the initial state when the AI response is ready to be sent to your backend.

  2. Delivering – The HTTP request to your endpoint is in progress. ModelRiver is actively sending the webhook payload.

  3. Success – Your endpoint returned a 2xx status code. The delivery is complete and your backend has received the AI response.

  4. Error – Delivery failed. Common causes include:

    • Connection refused – Your endpoint is down or unreachable
    • Timeout – Your endpoint is too slow to respond
    • Non-2xx response – Your endpoint returned an error (4xx, 5xx)
    • Network error – DNS resolution failure, SSL issues, etc.
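The terminal states boil down to a simple rule: any transport failure or non-2xx response is an Error, and only a 2xx response counts as Success. A minimal sketch of that classification (the function name and signature are illustrative, not part of ModelRiver's API):

```python
def classify_delivery(status_code=None, exc=None):
    """Map a delivery outcome to a timeline status.

    exc  -- an exception raised while sending (connection refused,
            timeout, DNS failure, SSL error, ...)
    """
    if exc is not None:
        return "Error"  # transport-level failure, no HTTP response at all
    if 200 <= status_code < 300:
        return "Success"  # your endpoint acknowledged the webhook
    return "Error"  # endpoint responded, but with 4xx/5xx
```

Note that any 2xx code (200, 201, 204, ...) is treated as an acknowledgement, so returning 204 with an empty body is enough.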

Retry functionality

Automatic retries

ModelRiver automatically retries failed webhook deliveries with exponential backoff.
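Exponential backoff means each retry waits longer than the one before it. A sketch of how such a schedule is typically computed; the exact base, factor, and cap here are illustrative, not ModelRiver's actual values:

```python
def backoff_delay(attempt, base=30, factor=2, cap=300):
    """Delay (in seconds) before the given retry attempt.

    With these illustrative values: attempt 1 -> 30s, 2 -> 60s,
    3 -> 120s, capped at 300s. ModelRiver's real schedule may differ.
    """
    return min(base * factor ** (attempt - 1), cap)
```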

Manual retries

You can manually retry from the Request Logs detail view when:

  • can_retry is true
  • The delivery failed (success is false)
  • At least 30 seconds have passed since the delivery was sent

Retry rules

  • Maximum attempts – 3 total (original + 2 retries)
  • Time window – 5 minutes from first delivery attempt
  • Minimum delay – 30 seconds between attempts
  • No duplicates – Once retried, the original delivery cannot be retried again
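Taken together, the manual-retry conditions and these rules form a single eligibility check. A sketch of that logic, assuming a delivery record with illustrative field names (`success`, `already_retried`, `attempt`, `first_attempt_at`, `sent_at`):

```python
from datetime import datetime, timedelta

MAX_ATTEMPTS = 3                       # original + 2 retries
RETRY_WINDOW = timedelta(minutes=5)    # measured from the first attempt
MIN_DELAY = timedelta(seconds=30)      # between consecutive attempts

def can_retry(delivery, now=None):
    """Return True if a failed delivery is still eligible for retry.

    The field names on `delivery` are illustrative; ModelRiver's
    actual record shape may differ.
    """
    now = now or datetime.utcnow()
    return (
        not delivery["success"]                                # must have failed
        and not delivery["already_retried"]                    # no duplicates
        and delivery["attempt"] < MAX_ATTEMPTS                 # attempts left
        and now - delivery["first_attempt_at"] <= RETRY_WINDOW # still relevant
        and now - delivery["sent_at"] >= MIN_DELAY             # cooled down
    )
```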

Why these rules exist

  • 3 attempts maximum – Prevents infinite loops and resource exhaustion
  • 5-minute window – Ensures retries happen while data is still relevant; stale deliveries could cause confusion
  • 30-second delay – Prevents retry storms that could overwhelm your endpoint
  • No duplicates – Each retry creates a new delivery record, so the delivery history stays unambiguous

How to retry manually

  1. Navigate to the Request Log detail view
  2. Click the webhook delivery in the timeline
  3. If the Retry button is visible, click it
  4. A new delivery attempt is created and queued
  5. Refresh the page to see the new delivery status

Troubleshooting webhook deliveries

Connection refused

Symptom: Error shows "connection refused" or "ECONNREFUSED"

Cause: Your webhook endpoint is not running or not accessible from ModelRiver's servers.

Resolution:

  • Verify your endpoint is running and accessible
  • Check firewall rules allow inbound connections from ModelRiver
  • If using a development endpoint, ensure it's publicly accessible (use ModelRiver CLI for local testing)

Timeout

Symptom: Error shows "timeout" or delivery duration is very long

Cause: Your endpoint is taking too long to process the webhook.

Resolution:

  • Optimize your webhook handler to respond quickly (< 5 seconds)
  • Move heavy processing to a background job and return 200 immediately
  • Increase server resources if the endpoint is under load
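The "acknowledge first, process later" pattern can be sketched with nothing but the standard library: the request handler only enqueues the payload and returns 200, while a worker thread does the slow work. The port, path, and worker body are illustrative:

```python
import json
import queue
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

jobs = queue.Queue()  # hand-off point between the HTTP thread and the worker

def worker():
    """Drain the queue; all slow processing happens here, off the request path."""
    while True:
        payload = jobs.get()
        # ... heavy work (database writes, downstream API calls) goes here ...
        jobs.task_done()

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        jobs.put(json.loads(self.rfile.read(length)))  # enqueue, don't process inline
        self.send_response(200)                        # acknowledge immediately
        self.end_headers()

if __name__ == "__main__":
    threading.Thread(target=worker, daemon=True).start()
    HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

In a real backend you would use your framework's equivalent (a job queue such as Celery, Sidekiq, or similar) rather than an in-process thread, but the shape is the same: respond within the timeout, then process.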

Non-2xx response

Symptom: HTTP status code is 4xx or 5xx

Cause: Your endpoint is returning an error.

Resolution:

  • Check your endpoint logs for error details
  • Verify the webhook payload format matches your handler's expectations
  • Ensure authentication (if any) is configured correctly
  • Test your endpoint with the copied payload using curl or Postman
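If you prefer scripting the replay over curl or Postman, the copied payload can be re-sent with the standard library. A sketch, with the URL and header set as assumptions (check your endpoint's expected headers, including any signature header ModelRiver sends):

```python
import json
import urllib.request

def build_replay(url, payload):
    """Build a POST request that mirrors a webhook delivery.

    `url` is your endpoint; `payload` is the dict copied from the
    Request Data tab. Header set is illustrative.
    """
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# req = build_replay("https://example.com/webhooks", copied_payload)
# with urllib.request.urlopen(req, timeout=5) as resp:
#     print(resp.status, resp.read())  # compare against your endpoint logs
```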

SSL/TLS errors

Symptom: Error referencing SSL, TLS, or certificate issues

Cause: SSL certificate problem on your endpoint.

Resolution:

  • Verify your SSL certificate is valid and not expired
  • Ensure your endpoint uses a trusted CA certificate
  • Check if intermediate certificates are properly configured

Inspecting webhook payloads

Why payload inspection matters

Understanding the webhook payload helps you:

  • Verify format – Ensure your handler expects the correct structure
  • Debug processing errors – Identify mismatches between payload and handler logic
  • Test endpoints – Copy real payloads to test your endpoint locally
  • Understand data flow – See exactly what data your backend receives

Key payload fields

For standard async webhooks:

  • type – Event type (e.g., "task.completed", "task.ai_generated")
  • data – The AI response data
  • channel_id – Unique identifier for this async request

For event-driven workflows:

  • callback_url – URL your backend should call with processed data
  • Additional fields specific to the event-driven flow
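A handler typically branches on these fields. The sketch below assumes the payload shape described above; the return values and processing steps are illustrative:

```python
import json

def handle_webhook(raw_body):
    """Dispatch on the payload fields described above (shape is illustrative)."""
    event = json.loads(raw_body)
    if event.get("type") == "task.completed":
        result = event["data"]         # the AI response
        channel = event["channel_id"]  # ties the result back to the async request
        # ... store or forward `result`, keyed by `channel` ...
        return f"completed:{channel}"
    if "callback_url" in event:
        # Event-driven flow: process the data, then POST the outcome
        # to event["callback_url"].
        return "callback-pending"
    return "ignored"
```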

Next steps