Async

Reliable webhooks with retries

Get notified when requests complete. Perfect for background processing without keeping connections open.

Signed payloads · Backoff + retries · Queue-based · Logged outcomes


Webhook delivery flow

Events enter a queue, retry with backoff, and land in a dead-letter queue (DLQ) if they fail.

01

Event created

Completion or status change triggers

02

Enqueued for delivery

Ordered, signed with secret

03

Delivery with retries

Backoff until 200 OK received

04

Delivery success

200 OK, signature verified

05

DLQ if failed

Inspect and replay failed events

06

Attempts + timing logged

Full audit trail in request logs
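The six steps above can be sketched as a minimal queue worker. This is an illustrative model of the flow, not ModelRiver's implementation; the attempt budget, DLQ handling, and log shape here are assumptions made for the sketch.

```python
from collections import deque

def process_queue(queue: deque, send, max_attempts: int = 3):
    """Drain the delivery queue. Each event gets up to max_attempts
    deliveries; events that never receive a 2xx land in the DLQ for
    inspection and replay. Every attempt is logged for the audit trail."""
    dlq, audit_log = [], []
    while queue:
        event = queue.popleft()
        delivered = False
        for attempt in range(1, max_attempts + 1):
            status = send(event)  # returns the endpoint's HTTP status code
            audit_log.append((event["id"], attempt, status))
            if 200 <= status < 300:
                delivered = True
                break
        if not delivered:
            dlq.append(event)  # kept for inspect-and-replay
    return dlq, audit_log
```

Replaying a DLQ entry is then just re-enqueueing it once the endpoint is healthy again.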

Delivery
webhook_url: "https://api.yourapp.com/hooks/ai"
signature_header: "mr-signature"
attempts:
  - status: 500, backoff_ms: 500
  - status: 200, backoff_ms: 0
payload: { workflow, status, data, meta }
              
1

Queue + retry

Webhook jobs retry with backoff until your endpoint confirms delivery.

2

Signed and verifiable

Each request includes signatures so you can verify authenticity.

3

Full audit trail

Successes and failures appear in analytics and request logs with timing.
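Verification on your side (point 2) might look like the following sketch. It assumes the common scheme of an HMAC-SHA256 hex digest over the raw request body, carried in the mr-signature header; check the console and docs for the exact algorithm and encoding your account uses.

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, header_value: str, secret: str) -> bool:
    """Recompute the HMAC over the raw request body and compare it to the
    value taken from the mr-signature header."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # constant-time comparison to avoid timing attacks
    return hmac.compare_digest(expected, header_value)

# Example: a payload signed with the shared secret passes verification.
secret = "whsec_example"  # placeholder secret for illustration
body = b'{"workflow": "order-processor", "status": "success"}'
signature = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
assert verify_signature(body, signature, secret)
assert not verify_signature(body, signature, "wrong-secret")
```

Always verify against the raw bytes of the request body, before any JSON parsing, since re-serialization can change the byte sequence and break the comparison.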

Retry policy

Backoff

Automatic retries until acknowledged.

Security

Signed

Verify integrity with a shared secret.

Monitoring

Logged

Inspect deliveries by status and time.

Delivery flow

01 · Queue

Completion enqueued with payload and signature secret.

02 · Deliver

Send webhook with mr-signature header; await 2xx.

03 · Retry

Backoff and retry until acknowledged or budget exhausted.

04 · Log

Record attempts in analytics and request logs.
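The Deliver and Retry steps reduce to a backoff loop. The doubling schedule and attempt budget below are illustrative, not ModelRiver's exact policy:

```python
import time

def deliver_with_retries(send, max_attempts: int = 5, base_ms: int = 500):
    """Call send() (which returns an HTTP status code) until a 2xx response
    arrives or the retry budget is exhausted. The backoff doubles after each
    failed attempt. Returns (delivered, attempt_statuses) for logging."""
    attempts = []
    backoff_ms = base_ms
    for attempt in range(1, max_attempts + 1):
        status = send()
        attempts.append(status)
        if 200 <= status < 300:
            return True, attempts
        if attempt < max_attempts:
            time.sleep(backoff_ms / 1000)
            backoff_ms *= 2
    return False, attempts

# Simulated endpoint: fails once with 500, then succeeds.
responses = iter([500, 200])
ok, log = deliver_with_retries(lambda: next(responses), base_ms=1)
# ok == True, log == [500, 200]
```

This matches the attempt log shown in the Delivery example above: a 500 followed by a backoff, then a 200 that ends the loop.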

Use cases

  • Notify your app when async workflows finish.
  • Keep CRM, billing, or analytics in sync.
  • Power long-running jobs without keeping clients open.

What's unique

  • Signed payloads for integrity.
  • Retries with backoff and clear audit trail.
  • Works alongside streaming and async requests.

Event-Driven Workflows

For complex workflows that require backend processing after AI generation, use event-driven workflows. Set an event_name on your workflow to enable a three-step flow: AI generation → your backend processing → final response.

01

AI generates

ModelRiver processes the AI request and sends the result to your webhook with type: "task.ai_generated".

02

You process

Your backend executes custom logic (e.g., database updates, tool calls) and calls back to ModelRiver's callback endpoint.

03

Final response

ModelRiver broadcasts the completed result to WebSocket channels, updating your frontend in real time.

  // Set event_name on workflow creation
  {
    "name": "movie_suggestion",
    "event_name": "new_movie_suggestion",
    ...
  }
  
  // Webhook payload includes callback_url
  {
    "type": "task.ai_generated",
    "event": "new_movie_suggestion",
    "ai_response": {...},
    "callback_url": "https://api.modelriver.com/v1/callback/{channel_id}",
    "callback_required": true
  }
  
  // Your backend calls back with execution results
  POST /api/v1/callback/{channel_id}
  Authorization: Bearer YOUR_API_KEY
  {
    "data": {...},
    "task_id": "abc123",
    "metadata": {...}
  }
            
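A handler for step 02 might look like the sketch below. The record_id and metadata values are placeholders for your own backend logic, and the exact webhook fields beyond those shown above are assumptions; in production you would send the built request with urllib.request.urlopen (or your HTTP client of choice).

```python
import json
import urllib.request

API_KEY = "mr_live_your_key"  # placeholder; use your real ModelRiver key

def handle_webhook(event: dict) -> urllib.request.Request:
    """Process a task.ai_generated event and build the callback request
    that ModelRiver expects at event["callback_url"]."""
    assert event["type"] == "task.ai_generated"

    # Your custom backend logic goes here, e.g. persist the AI result
    # to your database and obtain its id. record_id is a placeholder.
    record_id = "rec_123"

    body = json.dumps({
        "data": {**event["ai_response"], "id": record_id},
        "task_id": event.get("task_id"),
        "metadata": {"processed": True},
    }).encode()

    return urllib.request.Request(
        event["callback_url"],
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # In production: urllib.request.urlopen(handle_webhook(event))
```

Injecting the database id into the AI's data before calling back mirrors what the playground simulation (described below) does for you automatically.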

Automatic timeout handling

If your backend doesn't call back within 5 minutes, ModelRiver automatically sends a timeout error to the WebSocket channel.

Testing Event-Driven Workflows

The playground automatically simulates the complete event-driven flow when testing workflows with event_name set, helping you understand and validate the flow before production.

Simulated Flow

  1. AI generates the response data
  2. Webhook event triggered (simulated)
  3. Backend processes data (simulated ~1.5s delay)
  4. Backend injects id field into AI's data
  5. Returns merged data + original ai_response

What You See

  • "AI generated" status with progress indicator
  • "Simulating backend callback" message
  • data with AI's fields + injected id
  • ai_response with original AI data

No backend setup required: The playground simulation helps you visualize and validate your workflow logic before implementing the actual webhook callback in your production environment.

Programmatic access

Use the async API with webhook callbacks

POST https://api.modelriver.com/v1/ai/async
Authorization: Bearer mr_live_your_key

{
  "workflow": "order-processor",
  "messages": [...]
}

// Your webhook receives the completed response:
{
  "channel_id": "550e8400-e29b-...",
  "status": "success",
  "data": { ... },
  "meta": { ... }
}

Configure webhook URLs in the console. Payloads are signed, with automatic retries and a DLQ for failed deliveries.
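The request above can be issued from any HTTP client. A minimal standard-library sketch, mirroring the message shape shown above (send the built request with urllib.request.urlopen):

```python
import json
import urllib.request

def build_async_request(workflow: str, messages: list,
                        api_key: str) -> urllib.request.Request:
    """Build the POST to the async endpoint. The completed response is
    delivered later to your configured webhook, so the immediate HTTP
    reply only acknowledges that the job was accepted."""
    return urllib.request.Request(
        "https://api.modelriver.com/v1/ai/async",
        data=json.dumps({"workflow": workflow, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# In production: urllib.request.urlopen(build_async_request(...))
```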

Deliver even when clients disconnect

Use webhooks with streaming or async workflows to keep downstream systems up to date.