Ensure reliable webhook delivery

Track every webhook delivery attempt, diagnose failures, and use retry functionality to ensure your backend receives every notification from async AI requests.

Overview

For async requests and event-driven workflows, webhook delivery is the critical bridge between ModelRiver and your backend. A failed webhook means your application won't know the AI response is ready. Request Logs track every delivery attempt in detail, so you can monitor reliability and fix issues fast.


Webhook delivery lifecycle

Every webhook delivery goes through these states:

Planned → Delivering → Success
Planned → Delivering → Error → Retry (auto/manual) → Success / Error

Status meanings

Status     | Visual      | Meaning
Planned    | Gray badge  | Webhook queued, waiting to be sent
Delivering | Blue badge  | HTTP request in progress
Success    | Green badge | Your endpoint returned 2xx
Error      | Red badge   | Delivery failed — action needed
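
Only the HTTP status your endpoint returns decides the final state: any 2xx response is recorded as Success, anything else (or no response at all) as Error. A minimal sketch, assuming an Express handler; enqueueForProcessing is a hypothetical helper:

JAVASCRIPT
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhooks/modelriver', (req, res) => {
  try {
    enqueueForProcessing(req.body);               // hypothetical: hand off for later work
    res.status(200).json({ status: 'ok' });       // any 2xx → delivery marked Success
  } catch (err) {
    res.status(500).json({ error: err.message }); // non-2xx → delivery marked Error
  }
});

app.listen(3000);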

Step-by-step webhook monitoring

1. Review delivery status

  1. Navigate to Request Logs and filter to Live mode
  2. Look for async requests (those with webhook deliveries in the timeline)
  3. Check the webhook delivery status badges:
    • Green ✓ — Delivered successfully
    • Red ✗ — Failed, requires attention

2. Inspect failed deliveries

Click a failed webhook delivery to see:

Webhook Delivery Details
Status: Error
URL: https://api.yourapp.com/webhooks/mr
Duration: 30,012ms
HTTP: timeout
Error: Request timeout after 30s
Sent at: 2 minutes ago
Can retry: Yes

3. Analyze the payload

Click the Request Data tab to see exactly what was sent:

JSON
{
  "type": "task.completed",
  "data": {
    "channel_id": "ch_abc123",
    "request_id": "req_xyz789",
    "model": "gpt-4o",
    "provider": "openai",
    "response": {
      "choices": [
        {
          "message": {
            "role": "assistant",
            "content": "Here is the AI response..."
          }
        }
      ],
      "usage": {
        "prompt_tokens": 250,
        "completion_tokens": 180
      }
    }
  }
}
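
A handler for this payload might look like the sketch below (Express assumed; saveAiResponse is a hypothetical async function, and the field paths mirror the example above):

JAVASCRIPT
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhooks/modelriver', (req, res) => {
  // Acknowledge first so the delivery is marked Success
  res.status(200).json({ status: 'ok' });

  const { type, data } = req.body;
  if (type === 'task.completed') {
    const content = data.response.choices[0].message.content;
    saveAiResponse(data.request_id, content).catch(console.error); // hypothetical
  }
});

app.listen(3000);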

For event-driven workflows, the payload includes a callback_url:

JSON
{
  "type": "task.ai_generated",
  "data": {
    "channel_id": "ch_abc123",
    "callback_url": "https://api.modelriver.com/callbacks/cb_def456",
    "response": { ... }
  }
}
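
In that case your backend needs to extract callback_url from the payload and POST the processed result back to it. A rough sketch with an assumed callback body (check your workflow configuration for the exact format ModelRiver expects):

JAVASCRIPT
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhooks/modelriver', (req, res) => {
  res.status(200).json({ status: 'accepted' });

  const { data } = req.body;
  if (data && data.callback_url) {
    // Must finish well inside the 5-minute callback timeout (Node 18+ global fetch)
    fetch(data.callback_url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // Assumed body shape; adjust to what your workflow expects
      body: JSON.stringify({ channel_id: data.channel_id, status: 'processed' }),
    }).catch(console.error);
  }
});

app.listen(3000);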

4. Check your endpoint's response

Click the Response tab to see what your endpoint returned:

Successful response:

JSON
{
  "status": "ok",
  "processed": true
}

Error response (your endpoint had an issue):

JSON
{
  "error": "database_connection_failed",
  "message": "Unable to connect to PostgreSQL"
}

5. Retry failed deliveries

When a delivery fails and can_retry is true:

  1. Fix the issue with your endpoint (if applicable)
  2. Click the Retry button in the delivery details
  3. A new delivery attempt is created and queued
  4. Monitor the new attempt in the timeline

Retry rules:

  • Maximum 3 attempts total (original + 2 retries)
  • Must be within 5 minutes of the first delivery
  • Must wait at least 30 seconds between retries
  • Each retry creates a new delivery record

Common webhook issues and solutions

Connection refused

Error: connect ECONNREFUSED 10.0.0.1:443

Causes:

  • Your server is down or unreachable
  • Firewall blocking ModelRiver's IP addresses
  • Wrong port in webhook URL

Fix: Verify your server is running, check firewall rules, and confirm the webhook URL is correct.
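
A quick way to reproduce this from outside your network is to POST a dummy payload to the configured URL yourself; a sketch using Node's built-in fetch (the URL and payload are placeholders):

JAVASCRIPT
// Run from a machine outside your network (requires Node 18+ for global fetch)
const url = 'https://api.yourapp.com/webhooks/mr'; // your configured webhook URL

fetch(url, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ type: 'connectivity.test' }), // dummy payload
})
  .then((res) => console.log('Reachable, HTTP status:', res.status))
  .catch((err) => console.error('Unreachable:', err.message)); // e.g. ECONNREFUSED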

Timeout

Error: Request timeout after 30000ms

Causes:

  • Your endpoint is too slow to respond
  • Heavy processing blocking the response
  • Database queries taking too long

Fix: Acknowledge the webhook immediately (return 200), then process asynchronously:

JAVASCRIPT
// Good: Respond immediately, process in background
app.post('/webhooks/modelriver', async (req, res) => {
  res.status(200).json({ status: 'accepted' });

  // Process in background
  processWebhook(req.body).catch(console.error);
});

// Bad: Process before responding
app.post('/webhooks/modelriver', async (req, res) => {
  await heavyProcessing(req.body); // This may timeout!
  res.status(200).json({ status: 'done' });
});

SSL certificate errors

Error: UNABLE_TO_VERIFY_LEAF_SIGNATURE

Fix: Ensure your SSL certificate is valid, properly chained, and not self-signed (or add your CA to the trust store).
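
To check how your certificate chain verifies against a default CA store, you can run a quick TLS handshake from Node (the host name is a placeholder):

JAVASCRIPT
const tls = require('node:tls');

// rejectUnauthorized: false lets the handshake complete so we can inspect
// the verification result instead of failing immediately
const socket = tls.connect(
  { host: 'api.yourapp.com', port: 443, servername: 'api.yourapp.com', rejectUnauthorized: false },
  () => {
    console.log('chain verifies:', socket.authorized);             // false for self-signed or broken chains
    console.log('reason:', socket.authorizationError || 'none');   // e.g. UNABLE_TO_VERIFY_LEAF_SIGNATURE
    socket.end();
  }
);
socket.on('error', (err) => console.error('TLS error:', err.message));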

Non-2xx response

HTTP Status: 500 Internal Server Error

Fix: Check your server logs for the error (see the handler sketch after this list). Common causes include:

  • Missing webhook handler route
  • JSON parsing errors
  • Application exceptions during processing
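
A hardened route keeps those errors out of the HTTP response path. The sketch below (Express assumed, handleEvent hypothetical) acknowledges first, catches processing exceptions, and turns malformed JSON into a logged 400 instead of an unexplained 500:

JAVASCRIPT
const express = require('express');
const app = express();
app.use(express.json()); // malformed JSON is caught by the error handler below

app.post('/webhooks/modelriver', (req, res) => {
  res.status(200).json({ status: 'accepted' }); // acknowledge before any processing

  try {
    handleEvent(req.body); // hypothetical processing function
  } catch (err) {
    console.error('webhook processing failed:', err); // log instead of crashing the route
  }
});

// Express error handler: body-parser errors land here with a clear log line
app.use((err, req, res, next) => {
  console.error('webhook request rejected:', err.message);
  res.status(400).json({ error: 'invalid_payload' });
});

app.listen(3000);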

Monitoring callback status

For event-driven workflows, also monitor callback status:

Timeline for an event-driven request:
AI generation      success       1,200ms
Webhook delivery   success       45ms
Backend callback   in progress   (waiting...)

If the callback times out (after 5 minutes):

AI generation      success       1,200ms
Webhook delivery   success       45ms
Backend callback   timeout       300,000ms

Common callback issues:

  • Backend doesn't extract and use the callback_url from the webhook payload
  • Backend takes too long to process (> 5 minute timeout)
  • Backend sends callback data in wrong format
  • Network issues between your backend and ModelRiver

Next steps