Reliable webhooks with event-driven workflows

Get notified when requests complete. Execute custom backend logic between AI generation and final response for complex workflows.

Webhooks enable your backend to receive real-time notifications when async AI requests complete. Event-driven workflows extend this further, allowing you to process AI-generated data with custom logic before sending the final response to your users.

Why use webhooks?

  • Async background processing – Don't keep client connections open; ModelRiver notifies your backend when requests finish
  • Reliable delivery – Automatic retries with exponential backoff ensure your backend receives notifications
  • Signed payloads – Verify authenticity with HMAC signatures
  • Full audit trail – All delivery attempts are logged with timestamps and statuses
  • Event-driven workflows – Execute custom business logic between AI generation and final response

Setting up webhooks

1. Create a webhook endpoint in your console

  1. Navigate to Webhooks in your project
  2. Click Create Webhook
  3. Enter your endpoint URL (e.g., https://api.yourapp.com/webhooks/ai)
  4. Optionally add a description
  5. Save and note the webhook ID

2. Configure your webhook signature secret

ModelRiver signs all webhook payloads so you can verify their authenticity:

  1. When creating a webhook, ModelRiver generates a signature secret
  2. Store this secret securely in your environment
  3. Use it to verify the mr-signature header on incoming requests

Standard webhooks

For workflows without an event_name, ModelRiver sends the complete AI response to your webhook endpoint immediately after processing.

Webhook payload structure

JSON
{
  "type": "task.completed",
  "workflow": "customer-support-summary",
  "status": "success",
  "channel_id": "550e8400-e29b-41d4-a716-446655440000",
  "data": {
    "summary": "Customer requested refund for order #12345...",
    "sentiment": "negative",
    "category": "billing"
  },
  "meta": {
    "provider": "openai",
    "model": "gpt-4o",
    "tokens": {
      "prompt": 245,
      "completion": 89,
      "total": 334
    },
    "duration_ms": 2341,
    "attempts": [
      {
        "provider": "openai",
        "model": "gpt-4o",
        "duration_ms": 2341,
        "success": true
      }
    ]
  },
  "customer_data": {
    "user_id": "user_789",
    "session_id": "sess_abc123"
  },
  "timestamp": "2026-01-05T12:34:56.789Z"
}

Headers sent with webhooks

  • mr-signature – HMAC-SHA256 signature of the payload
  • mr-timestamp – Unix timestamp when the webhook was sent
  • mr-channel-id – Unique identifier for this request
  • content-type – Always application/json
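
Because every delivery carries mr-timestamp, you can also reject stale deliveries as a cheap replay guard. A minimal sketch; the five-minute tolerance is an assumption to tune for your environment, not a documented ModelRiver limit:

```javascript
// Replay guard using the mr-timestamp header (Unix seconds).
// MAX_AGE_SECONDS is an assumed tolerance, not a ModelRiver requirement.
const MAX_AGE_SECONDS = 300;

function isFreshWebhook(timestampHeader, nowMs = Date.now()) {
  const sentAt = Number.parseInt(timestampHeader, 10);
  if (Number.isNaN(sentAt)) return false;
  const ageSeconds = nowMs / 1000 - sentAt;
  // Reject timestamps from the future as well as stale ones
  return ageSeconds >= 0 && ageSeconds <= MAX_AGE_SECONDS;
}
```

Run this check alongside signature verification; a valid signature on a replayed payload is still a replay.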

Verifying webhook signatures

Verify the mr-signature header to ensure the request came from ModelRiver:

Node.js (Express):

JAVASCRIPT
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

function verifyWebhookSignature(payload, signature, secret) {
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(JSON.stringify(payload))
    .digest('hex');

  // Guard first: timingSafeEqual throws if the buffers differ in length
  if (!signature || signature.length !== expectedSignature.length) {
    return false;
  }

  return crypto.timingSafeEqual(
    Buffer.from(signature),
    Buffer.from(expectedSignature)
  );
}

app.post('/webhooks/ai', (req, res) => {
  const signature = req.headers['mr-signature'];
  const webhookSecret = process.env.MODELRIVER_WEBHOOK_SECRET;

  if (!verifyWebhookSignature(req.body, signature, webhookSecret)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  // Process the webhook payload
  const { type, data, customer_data } = req.body;

  // Your business logic here
  console.log('AI completed:', data);

  res.status(200).json({ received: true });
});

Python (Flask):

PYTHON
import hmac
import hashlib
import json
import os

from flask import Flask, request, jsonify

app = Flask(__name__)

def verify_webhook_signature(payload, signature, secret):
    expected_signature = hmac.new(
        secret.encode('utf-8'),
        json.dumps(payload).encode('utf-8'),
        hashlib.sha256
    ).hexdigest()

    return hmac.compare_digest(signature, expected_signature)

@app.route('/webhooks/ai', methods=['POST'])
def handle_webhook():
    signature = request.headers.get('mr-signature', '')
    webhook_secret = os.environ['MODELRIVER_WEBHOOK_SECRET']

    if not verify_webhook_signature(request.json, signature, webhook_secret):
        return jsonify({'error': 'Invalid signature'}), 401

    # Process the webhook payload
    payload = request.json
    event_type = payload['type']
    data = payload['data']

    # Your business logic here
    print(f'AI completed: {data}')

    return jsonify({'received': True}), 200

Event-driven workflows

Event-driven workflows enable a three-step flow:

  1. AI generates – ModelRiver processes the AI request
  2. You process – Your backend executes custom logic (database updates, tool calls, validation)
  3. Final response – ModelRiver broadcasts the completed result to WebSocket channels

This is ideal for scenarios where you need to:

  • Execute tool/function calls based on AI output
  • Validate and enrich AI responses with database data
  • Implement multi-step workflows with approval gates
  • Trigger side effects (notifications, database updates) before returning to users

Setting up event-driven workflows

1. Add an event name to your workflow

When creating or editing a workflow in the console, set the Event name field:

Bash
curl -X POST https://api.modelriver.com/api/console/workflow \
  -H "Authorization: Bearer mr_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "movie_suggestion",
    "event_name": "new_movie_suggestion",
    "provider": "openai",
    "model": "gpt-4o",
    "structured_output_id": "schema_abc123"
  }'

Or via the dashboard:

  1. Open Workflows in your project
  2. Click Create Workflow or edit an existing workflow
  3. In the Event-Driven Workflow section, enter an event name (e.g., new_movie_suggestion)
  4. Save the workflow

2. Webhook payload for event-driven workflows

When a workflow with an event_name completes, ModelRiver sends a different payload structure:

JSON
{
  "type": "task.ai_generated",
  "event": "new_movie_suggestion",
  "channel_id": "550e8400-e29b-41d4-a716-446655440000",
  "ai_response": {
    "data": {
      "title": "Inception",
      "year": 2010,
      "director": "Christopher Nolan",
      "genre": ["Sci-Fi", "Thriller"],
      "rating": 8.8
    }
  },
  "callback_url": "https://api.modelriver.com/api/v1/callback/550e8400-e29b-41d4-a716-446655440000",
  "callback_required": true,
  "meta": {
    "workflow_id": "wf_abc123",
    "workflow_name": "movie_suggestion",
    "project_id": "proj_xyz789",
    "provider": "openai",
    "model": "gpt-4o"
  },
  "customer_data": {
    "user_id": "user_456",
    "preferences": "action,scifi"
  },
  "timestamp": "2026-01-05T12:34:56.789Z"
}

Key differences from standard webhooks:

  • type is task.ai_generated (not task.completed)
  • event contains your custom event name
  • ai_response wraps the AI-generated data
  • callback_url is provided for you to call back to ModelRiver
  • callback_required: true indicates ModelRiver is waiting for your callback
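
The two payload shapes can be told apart with a small check at the top of your handler. A minimal sketch, using only the fields shown in the payloads above:

```javascript
// Distinguish event-driven deliveries from standard ones.
// Field names match the webhook payloads documented above.
function classifyWebhook(payload) {
  if (payload.type === 'task.ai_generated' && payload.callback_required) {
    return 'event-driven'; // process, then POST the result to payload.callback_url
  }
  return 'standard'; // the final AI result is already in payload.data
}
```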

3. Process the AI response in your backend

Your webhook endpoint receives the AI response and can execute custom logic:

Node.js (Express) - Full Example:

JAVASCRIPT
const express = require('express');
const crypto = require('crypto');
const axios = require('axios');

const app = express();
app.use(express.json());

// Verify webhook signature
function verifyWebhookSignature(payload, signature, secret) {
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(JSON.stringify(payload))
    .digest('hex');

  // Guard first: timingSafeEqual throws if the buffers differ in length
  if (!signature || signature.length !== expectedSignature.length) {
    return false;
  }

  return crypto.timingSafeEqual(
    Buffer.from(signature),
    Buffer.from(expectedSignature)
  );
}

// Handle event-driven webhook
app.post('/webhooks/ai', async (req, res) => {
  const signature = req.headers['mr-signature'];
  const webhookSecret = process.env.MODELRIVER_WEBHOOK_SECRET;

  // 1. Verify the webhook signature
  if (!verifyWebhookSignature(req.body, signature, webhookSecret)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  const { type, event, ai_response, callback_url, customer_data } = req.body;

  // 2. Check if this is an event-driven workflow
  if (type === 'task.ai_generated' && callback_url) {
    // Respond immediately to acknowledge receipt
    res.status(200).json({ received: true });

    // 3. Process the AI response asynchronously
    processEventDrivenWorkflow(event, ai_response, callback_url, customer_data)
      .catch(error => {
        console.error('Error processing event-driven workflow:', error);

        // Send error to ModelRiver
        axios.post(callback_url, {
          error: 'processing_failed',
          message: error.message,
        }, {
          headers: {
            'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}`,
            'Content-Type': 'application/json'
          }
        }).catch(err => console.error('Error callback also failed:', err));
      });
  } else {
    // Standard webhook (no event name)
    const { data } = req.body;
    console.log('Standard webhook received:', data);
    res.status(200).json({ received: true });
  }
});

async function processEventDrivenWorkflow(event, aiResponse, callbackUrl, customerData) {
  console.log(`Processing event: ${event}`);

  // Example: Add movie to database and generate recommendations
  if (event === 'new_movie_suggestion') {
    const movieData = aiResponse.data;

    // 4. Execute your custom business logic
    // - Save to database
    const movie = await saveMovieToDatabase(movieData);

    // - Generate recommendations based on user preferences
    const recommendations = await generateRecommendations(
      customerData.user_id,
      movie.genre
    );

    // - Get streaming availability
    const streamingOptions = await checkStreamingAvailability(movie.title);

    // 5. Call back to ModelRiver with the enriched data
    await axios.post(callbackUrl, {
      data: {
        // Original AI data
        ...movieData,
        // Your enriched data
        id: movie.id,
        database_id: movie.database_id,
        recommendations: recommendations,
        streaming: streamingOptions,
        processed_at: new Date().toISOString()
      },
      task_id: `movie_${movie.id}`,
      metadata: {
        processing_time_ms: 234,
        sources_checked: 3,
        recommendations_count: recommendations.length
      }
    }, {
      headers: {
        'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}`,
        'Content-Type': 'application/json'
      }
    });

    console.log(`✅ Callback sent for movie ${movie.id}`);
  }
}

// Mock functions (implement your actual business logic)
async function saveMovieToDatabase(movieData) {
  // Your database logic here
  return {
    id: 'mov_123',
    database_id: 456,
    ...movieData
  };
}

async function generateRecommendations(userId, genres) {
  // Your recommendation engine here
  return [
    { title: 'The Matrix', rating: 8.7 },
    { title: 'Interstellar', rating: 8.6 }
  ];
}

async function checkStreamingAvailability(title) {
  // Check streaming services API
  return {
    netflix: true,
    prime: false,
    hulu: false
  };
}

app.listen(3000, () => {
  console.log('Webhook server running on port 3000');
});

Python (Django) - Full Example:

PYTHON
import hmac
import hashlib
import json
import os
from datetime import datetime

import requests
from celery import shared_task
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods

def verify_webhook_signature(payload, signature, secret):
    expected_signature = hmac.new(
        secret.encode('utf-8'),
        json.dumps(payload).encode('utf-8'),
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(signature, expected_signature)

@csrf_exempt
@require_http_methods(["POST"])
def webhook_handler(request):
    # 1. Verify the webhook signature
    signature = request.headers.get('mr-signature', '')
    webhook_secret = os.environ['MODELRIVER_WEBHOOK_SECRET']

    try:
        payload = json.loads(request.body)
    except json.JSONDecodeError:
        return JsonResponse({'error': 'Invalid JSON'}, status=400)

    if not verify_webhook_signature(payload, signature, webhook_secret):
        return JsonResponse({'error': 'Invalid signature'}, status=401)

    event_type = payload.get('type')
    callback_url = payload.get('callback_url')

    # 2. Check if this is an event-driven workflow
    if event_type == 'task.ai_generated' and callback_url:
        # Respond immediately to acknowledge receipt;
        # processing happens in a background task
        process_event_driven_workflow.delay(
            event=payload.get('event'),
            ai_response=payload.get('ai_response'),
            callback_url=callback_url,
            customer_data=payload.get('customer_data', {})
        )
        return JsonResponse({'received': True}, status=200)
    else:
        # Standard webhook (no event name)
        data = payload.get('data', {})
        print(f'Standard webhook received: {data}')
        return JsonResponse({'received': True}, status=200)

# Celery task (a configured Celery app is required for .delay() to work)
@shared_task
def process_event_driven_workflow(event, ai_response, callback_url, customer_data):
    """Process the AI response with custom business logic."""
    print(f'Processing event: {event}')

    if event == 'new_movie_suggestion':
        movie_data = ai_response['data']

        # 3. Execute your custom business logic
        # - Save to database
        movie = save_movie_to_database(movie_data)

        # - Generate recommendations
        recommendations = generate_recommendations(
            customer_data.get('user_id'),
            movie['genre']
        )

        # - Check streaming availability
        streaming_options = check_streaming_availability(movie['title'])

        # 4. Call back to ModelRiver with enriched data
        try:
            response = requests.post(
                callback_url,
                json={
                    'data': {
                        **movie_data,
                        'id': movie['id'],
                        'database_id': movie['database_id'],
                        'recommendations': recommendations,
                        'streaming': streaming_options,
                        'processed_at': datetime.now().isoformat()
                    },
                    'task_id': f"movie_{movie['id']}",
                    'metadata': {
                        'processing_time_ms': 234,
                        'sources_checked': 3,
                        'recommendations_count': len(recommendations)
                    }
                },
                headers={
                    'Authorization': f"Bearer {os.environ['MODELRIVER_API_KEY']}",
                    'Content-Type': 'application/json'
                },
                timeout=10
            )
            response.raise_for_status()
            print(f'✅ Callback sent for movie {movie["id"]}')
        except requests.exceptions.RequestException as e:
            print(f'❌ Callback failed: {e}')
            # Send error to ModelRiver
            requests.post(
                callback_url,
                json={
                    'error': 'processing_failed',
                    'message': str(e)
                },
                headers={
                    'Authorization': f"Bearer {os.environ['MODELRIVER_API_KEY']}",
                    'Content-Type': 'application/json'
                }
            )

# Mock functions (implement your actual logic)
def save_movie_to_database(movie_data):
    # Your database logic
    return {
        'id': 'mov_123',
        'database_id': 456,
        **movie_data
    }

def generate_recommendations(user_id, genres):
    # Your recommendation engine
    return [
        {'title': 'The Matrix', 'rating': 8.7},
        {'title': 'Interstellar', 'rating': 8.6}
    ]

def check_streaming_availability(title):
    # Check streaming services API
    return {
        'netflix': True,
        'prime': False,
        'hulu': False
    }

4. Callback API specification

After processing the AI response, call back to ModelRiver with the final data:

Endpoint: POST {callback_url}
Headers:

  • Authorization: Bearer {your_api_key}
  • Content-Type: application/json

Success payload:

JSON
{
  "data": {
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genre": ["Sci-Fi", "Thriller"],
    "rating": 8.8,
    "id": "mov_123",
    "database_id": 456,
    "recommendations": [
      { "title": "The Matrix", "rating": 8.7 },
      { "title": "Interstellar", "rating": 8.6 }
    ],
    "streaming": {
      "netflix": true,
      "prime": false
    }
  },
  "task_id": "movie_123",
  "metadata": {
    "processing_time_ms": 234,
    "sources_checked": 3
  }
}

Error payload:

JSON
{
  "error": "processing_failed",
  "message": "Database connection timeout"
}

Response: ModelRiver returns 200 OK on successful callback.
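
Putting the spec together, the callback request can be assembled as below. This is a sketch only: sending it with fetch or axios is up to your stack, and the extras argument carries the optional task_id/metadata fields shown in the examples above:

```javascript
// Build the callback request described in the spec above.
// How you send it (fetch, axios, got) is up to you.
function buildCallbackRequest(callbackUrl, data, apiKey, extras = {}) {
  return {
    url: callbackUrl,
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ data, ...extras }),
  };
}
```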

5. Frontend receives final response

ModelRiver broadcasts the final response (including your enriched data) to the WebSocket channel. Your frontend using the ModelRiver Client SDK receives:

JSON
{
  "status": "completed",
  "data": {
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genre": ["Sci-Fi", "Thriller"],
    "rating": 8.8,
    "id": "mov_123",
    "database_id": 456,
    "recommendations": [
      { "title": "The Matrix", "rating": 8.7 },
      { "title": "Interstellar", "rating": 8.6 }
    ],
    "streaming": {
      "netflix": true,
      "prime": false
    }
  },
  "ai_response": {
    "data": {
      "title": "Inception",
      "year": 2010,
      "director": "Christopher Nolan",
      "genre": ["Sci-Fi", "Thriller"],
      "rating": 8.8
    }
  },
  "customer_data": {
    "user_id": "user_456",
    "preferences": "action,scifi"
  }
}

Note: Both the enriched data (with your additions) and the original ai_response are available to the frontend.

Timeout handling

If your backend doesn't call back within 5 minutes, ModelRiver automatically:

  1. Sends a timeout error to the WebSocket channel
  2. Logs the timeout event
  3. Marks the request as failed

Testing event-driven workflows in the playground

The playground automatically simulates the complete event-driven flow when testing workflows with event_name set:

  1. AI generates the response
  2. "Simulating backend callback" message appears
  3. After ~1.5s delay, a simulated callback response is generated
  4. Final response is displayed with both original AI data and simulated enrichments

This helps you validate your workflow logic before implementing the actual webhook callback in production.

Webhook delivery and retries

Retry policy

ModelRiver implements exponential backoff with the following schedule:

  • Attempt 1 – Immediate
  • Attempt 2 – 5 seconds
  • Attempt 3 – 30 seconds
  • Attempt 4 – 2 minutes
  • Attempt 5 – 10 minutes
  • Attempt 6 – 30 minutes
  • Attempt 7 – 1 hour
  • Attempt 8 – 2 hours

After 8 failed attempts, the webhook is moved to the Dead Letter Queue (DLQ) for manual inspection.
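
Summing the schedule shows the recovery window a failing endpoint has before a delivery lands in the DLQ:

```javascript
// Retry delays from the schedule above, in seconds
const delays = [0, 5, 30, 120, 600, 1800, 3600, 7200];
const totalSeconds = delays.reduce((sum, d) => sum + d, 0);
// 13355 seconds: roughly 3 hours 42 minutes from first attempt to DLQ
console.log(totalSeconds);
```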

Successful delivery

Your endpoint should return a 2xx status code (preferably 200 OK) to acknowledge successful receipt.

Failed delivery

Any non-2xx status code, network timeout, or connection error triggers a retry.

Monitoring webhooks

All webhook deliveries are logged in your project's Request Logs:

  • Timeline view shows each delivery attempt
  • Status indicators mark success/failure
  • Payload inspection lets you view the exact data sent
  • Callback logs (for event-driven workflows) show your backend's response

Filter logs by event_name to isolate specific event-driven workflows.

Security best practices

  1. Always verify signatures – Never process webhooks without validating the mr-signature header
  2. Use HTTPS endpoints – ModelRiver only sends webhooks to https:// URLs in production
  3. Implement idempotency – Use channel_id to deduplicate webhook deliveries
  4. Set reasonable timeouts – Respond to webhooks within 10 seconds; use background jobs for long-running tasks
  5. Rate limit – Protect your webhook endpoints from abuse or accidental loops
  6. Store secrets securely – Keep webhook signature secrets in environment variables, never in code
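
A minimal sketch of point 3, keyed on channel_id. The in-memory Set is for illustration only; back it with Redis or your database so deduplication survives restarts and works across multiple server instances:

```javascript
// Deduplicate webhook deliveries by channel_id.
// In-memory only; use a persistent store in production.
const seenChannels = new Set();

function isDuplicateDelivery(channelId) {
  if (seenChannels.has(channelId)) return true;
  seenChannels.add(channelId);
  return false;
}
```

Call this before running business logic so retried deliveries of an already-processed request become harmless no-ops.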

Common patterns

Pattern 1: Fire-and-forget notifications

Standard webhook (no event_name): Receive AI results and trigger side effects (send emails, update databases) without blocking the AI response.

Pattern 2: Tool/function calling workflows

Event-driven workflow with event_name: AI generates a plan, your backend executes tool calls, then you return the final result to the frontend.

Pattern 3: Approval workflows

Event-driven workflow: AI generates content, your backend routes it to an approval queue, human approves, then you call back with the approved content.

Pattern 4: Multi-stage processing

Event-driven workflow: Chain multiple processing steps (AI → validation → enrichment → formatting) before delivering to users.

Next steps