Webhooks enable your backend to receive real-time notifications when async AI requests complete. Event-driven workflows extend this further, allowing you to process AI-generated data with custom logic before sending the final response to your users.
Why use webhooks?
- Async background processing – Don't keep client connections open; ModelRiver notifies your backend when requests finish
- Reliable delivery – Automatic retries with exponential backoff ensure your backend receives notifications
- Signed payloads – Verify authenticity with HMAC signatures
- Full audit trail – All delivery attempts are logged with timestamps and statuses
- Event-driven workflows – Execute custom business logic between AI generation and final response
Setting up webhooks
1. Create a webhook endpoint in your console
- Navigate to Webhooks in your project
- Click Create Webhook
- Enter your endpoint URL (e.g., https://api.yourapp.com/webhooks/ai)
- Optionally add a description
- Save and note the webhook ID
2. Configure your webhook signature secret
ModelRiver signs all webhook payloads so you can verify their authenticity:
- When creating a webhook, ModelRiver generates a signature secret
- Store this secret securely in your environment
- Use it to verify the mr-signature header on incoming requests
Standard webhooks
For workflows without an event_name, ModelRiver sends the complete AI response to your webhook endpoint immediately after processing.
Webhook payload structure
```json
{
  "type": "task.completed",
  "workflow": "customer-support-summary",
  "status": "success",
  "channel_id": "550e8400-e29b-41d4-a716-446655440000",
  "data": {
    "summary": "Customer requested refund for order #12345...",
    "sentiment": "negative",
    "category": "billing"
  },
  "meta": {
    "provider": "openai",
    "model": "gpt-4o",
    "tokens": {
      "prompt": 245,
      "completion": 89,
      "total": 334
    },
    "duration_ms": 2341,
    "attempts": [
      {
        "provider": "openai",
        "model": "gpt-4o",
        "duration_ms": 2341,
        "success": true
      }
    ]
  },
  "customer_data": {
    "user_id": "user_789",
    "session_id": "sess_abc123"
  },
  "timestamp": "2026-01-05T12:34:56.789Z"
}
```
Headers sent with webhooks
| Header | Description |
|---|---|
| mr-signature | HMAC-SHA256 signature of the payload |
| mr-timestamp | Unix timestamp when the webhook was sent |
| mr-channel-id | Unique identifier for this request |
| content-type | Always application/json |
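As a complementary check, the mr-timestamp header can be used to reject stale (potentially replayed) deliveries. A minimal sketch, assuming a 5-minute tolerance window — the window size is a choice of this example, not a documented ModelRiver constant:

```python
import time

# Hypothetical tolerance window: reject webhooks whose mr-timestamp is more
# than 5 minutes away from the current time, to guard against replays.
TOLERANCE_SECONDS = 300

def is_fresh(mr_timestamp, now=None):
    """Return True if the mr-timestamp header value is within the window."""
    now = time.time() if now is None else now
    try:
        sent_at = int(mr_timestamp)
    except (TypeError, ValueError):
        return False  # missing or malformed header: treat as stale
    return abs(now - sent_at) <= TOLERANCE_SECONDS
```

Combine this with signature verification (below); freshness alone does not prove the request came from ModelRiver.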
Verifying webhook signatures
Verify the mr-signature header to ensure the request came from ModelRiver:
Node.js (Express):
```javascript
const crypto = require('crypto');

function verifyWebhookSignature(payload, signature, secret) {
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(JSON.stringify(payload))
    .digest('hex');

  const sigBuffer = Buffer.from(signature || '', 'utf8');
  const expBuffer = Buffer.from(expectedSignature, 'utf8');

  // timingSafeEqual throws if the buffers differ in length, so check first
  if (sigBuffer.length !== expBuffer.length) return false;
  return crypto.timingSafeEqual(sigBuffer, expBuffer);
}

app.post('/webhooks/ai', (req, res) => {
  const signature = req.headers['mr-signature'];
  const webhookSecret = process.env.MODELRIVER_WEBHOOK_SECRET;

  if (!verifyWebhookSignature(req.body, signature, webhookSecret)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  // Process the webhook payload
  const { type, data, customer_data } = req.body;

  // Your business logic here
  console.log('AI completed:', data);

  res.status(200).json({ received: true });
});
```
Python (Flask):
```python
import hmac
import hashlib
import json
import os
from flask import Flask, request, jsonify

app = Flask(__name__)

def verify_webhook_signature(payload, signature, secret):
    expected_signature = hmac.new(
        secret.encode('utf-8'),
        json.dumps(payload).encode('utf-8'),
        hashlib.sha256
    ).hexdigest()

    return hmac.compare_digest(signature, expected_signature)

@app.route('/webhooks/ai', methods=['POST'])
def handle_webhook():
    signature = request.headers.get('mr-signature')
    webhook_secret = os.environ['MODELRIVER_WEBHOOK_SECRET']

    if not verify_webhook_signature(request.json, signature, webhook_secret):
        return jsonify({'error': 'Invalid signature'}), 401

    # Process the webhook payload
    payload = request.json
    event_type = payload['type']
    data = payload['data']

    # Your business logic here
    print(f'AI completed: {data}')

    return jsonify({'received': True}), 200
```
Event-driven workflows
Event-driven workflows enable a three-step flow:
- AI generates – ModelRiver processes the AI request
- You process – Your backend executes custom logic (database updates, tool calls, validation)
- Final response – ModelRiver broadcasts the completed result to WebSocket channels
This is ideal for scenarios where you need to:
- Execute tool/function calls based on AI output
- Validate and enrich AI responses with database data
- Implement multi-step workflows with approval gates
- Trigger side effects (notifications, database updates) before returning to users
Setting up event-driven workflows
1. Add an event name to your workflow
When creating or editing a workflow in the console, set the Event name field:
```bash
curl -X POST https://api.modelriver.com/api/console/workflow \
  -H "Authorization: Bearer mr_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "movie_suggestion",
    "event_name": "new_movie_suggestion",
    "provider": "openai",
    "model": "gpt-4o",
    "structured_output_id": "schema_abc123"
  }'
```
Or via the dashboard:
- Open Workflows in your project
- Click Create Workflow or edit an existing workflow
- In the Event-Driven Workflow section, enter an event name (e.g., new_movie_suggestion)
- Save the workflow
2. Webhook payload for event-driven workflows
When a workflow with an event_name completes, ModelRiver sends a different payload structure:
```json
{
  "type": "task.ai_generated",
  "event": "new_movie_suggestion",
  "channel_id": "550e8400-e29b-41d4-a716-446655440000",
  "ai_response": {
    "data": {
      "title": "Inception",
      "year": 2010,
      "director": "Christopher Nolan",
      "genre": ["Sci-Fi", "Thriller"],
      "rating": 8.8
    }
  },
  "callback_url": "https://api.modelriver.com/api/v1/callback/550e8400-e29b-41d4-a716-446655440000",
  "callback_required": true,
  "meta": {
    "workflow_id": "wf_abc123",
    "workflow_name": "movie_suggestion",
    "project_id": "proj_xyz789",
    "provider": "openai",
    "model": "gpt-4o"
  },
  "customer_data": {
    "user_id": "user_456",
    "preferences": "action,scifi"
  },
  "timestamp": "2026-01-05T12:34:56.789Z"
}
```
Key differences from standard webhooks:
- type is task.ai_generated (not task.completed)
- event contains your custom event name
- ai_response wraps the AI-generated data
- callback_url is provided for you to call back to ModelRiver
- callback_required: true indicates ModelRiver is waiting for your callback
3. Process the AI response in your backend
Your webhook endpoint receives the AI response and can execute custom logic:
Node.js (Express) - Full Example:
```javascript
const express = require('express');
const crypto = require('crypto');
const axios = require('axios');

const app = express();
app.use(express.json());

// Verify webhook signature
function verifyWebhookSignature(payload, signature, secret) {
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(JSON.stringify(payload))
    .digest('hex');

  const sigBuffer = Buffer.from(signature || '', 'utf8');
  const expBuffer = Buffer.from(expectedSignature, 'utf8');

  // timingSafeEqual throws if the buffers differ in length, so check first
  if (sigBuffer.length !== expBuffer.length) return false;
  return crypto.timingSafeEqual(sigBuffer, expBuffer);
}

// Handle event-driven webhook
app.post('/webhooks/ai', async (req, res) => {
  const signature = req.headers['mr-signature'];
  const webhookSecret = process.env.MODELRIVER_WEBHOOK_SECRET;

  // 1. Verify the webhook signature
  if (!verifyWebhookSignature(req.body, signature, webhookSecret)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  const { type, event, ai_response, callback_url, customer_data } = req.body;

  // 2. Check if this is an event-driven workflow
  if (type === 'task.ai_generated' && callback_url) {
    // Respond immediately to acknowledge receipt
    res.status(200).json({ received: true });

    // 3. Process the AI response asynchronously
    processEventDrivenWorkflow(event, ai_response, callback_url, customer_data)
      .catch(error => {
        console.error('Error processing event-driven workflow:', error);

        // Send error to ModelRiver
        axios.post(callback_url, {
          error: 'processing_failed',
          message: error.message,
        }, {
          headers: {
            'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}`,
            'Content-Type': 'application/json'
          }
        });
      });
  } else {
    // Standard webhook (no event name)
    const { data } = req.body;
    console.log('Standard webhook received:', data);
    res.status(200).json({ received: true });
  }
});

async function processEventDrivenWorkflow(event, aiResponse, callbackUrl, customerData) {
  console.log(`Processing event: ${event}`);

  // Example: Add movie to database and generate recommendations
  if (event === 'new_movie_suggestion') {
    const movieData = aiResponse.data;

    // 4. Execute your custom business logic
    // - Save to database
    const movie = await saveMovieToDatabase(movieData);

    // - Generate recommendations based on user preferences
    const recommendations = await generateRecommendations(
      customerData.user_id,
      movie.genre
    );

    // - Get streaming availability
    const streamingOptions = await checkStreamingAvailability(movie.title);

    // 5. Call back to ModelRiver with the enriched data
    await axios.post(callbackUrl, {
      data: {
        // Original AI data
        ...movieData,
        // Your enriched data
        id: movie.id,
        database_id: movie.database_id,
        recommendations: recommendations,
        streaming: streamingOptions,
        processed_at: new Date().toISOString()
      },
      task_id: `movie_${movie.id}`,
      metadata: {
        processing_time_ms: 234,
        sources_checked: 3,
        recommendations_count: recommendations.length
      }
    }, {
      headers: {
        'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}`,
        'Content-Type': 'application/json'
      }
    });

    console.log(`✅ Callback sent for movie ${movie.id}`);
  }
}

// Mock functions (implement your actual business logic)
async function saveMovieToDatabase(movieData) {
  // Your database logic here
  return {
    id: 'mov_123',
    database_id: 456,
    ...movieData
  };
}

async function generateRecommendations(userId, genres) {
  // Your recommendation engine here
  return [
    { title: 'The Matrix', rating: 8.7 },
    { title: 'Interstellar', rating: 8.6 }
  ];
}

async function checkStreamingAvailability(title) {
  // Check streaming services API
  return {
    netflix: true,
    prime: false,
    hulu: false
  };
}

app.listen(3000, () => {
  console.log('Webhook server running on port 3000');
});
```
Python (Django) - Full Example:
```python
import hmac
import hashlib
import json
import os
from datetime import datetime

import requests
from celery import shared_task
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods

def verify_webhook_signature(payload, signature, secret):
    expected_signature = hmac.new(
        secret.encode('utf-8'),
        json.dumps(payload).encode('utf-8'),
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(signature, expected_signature)

@csrf_exempt
@require_http_methods(["POST"])
def webhook_handler(request):
    # 1. Verify the webhook signature
    signature = request.headers.get('mr-signature')
    webhook_secret = os.environ['MODELRIVER_WEBHOOK_SECRET']

    try:
        payload = json.loads(request.body)
    except json.JSONDecodeError:
        return JsonResponse({'error': 'Invalid JSON'}, status=400)

    if not verify_webhook_signature(payload, signature, webhook_secret):
        return JsonResponse({'error': 'Invalid signature'}, status=401)

    event_type = payload.get('type')
    callback_url = payload.get('callback_url')

    # 2. Check if this is an event-driven workflow
    if event_type == 'task.ai_generated' and callback_url:
        # Respond immediately to acknowledge receipt;
        # process asynchronously in a background task
        process_event_driven_workflow.delay(
            event=payload.get('event'),
            ai_response=payload.get('ai_response'),
            callback_url=callback_url,
            customer_data=payload.get('customer_data', {})
        )
        return JsonResponse({'received': True}, status=200)
    else:
        # Standard webhook (no event name)
        data = payload.get('data', {})
        print(f'Standard webhook received: {data}')
        return JsonResponse({'received': True}, status=200)

# Celery task (the @shared_task decorator is what makes .delay() above work)
@shared_task
def process_event_driven_workflow(event, ai_response, callback_url, customer_data):
    """Process the AI response with custom business logic"""
    print(f'Processing event: {event}')

    if event == 'new_movie_suggestion':
        movie_data = ai_response['data']

        # 3. Execute your custom business logic
        # - Save to database
        movie = save_movie_to_database(movie_data)

        # - Generate recommendations
        recommendations = generate_recommendations(
            customer_data.get('user_id'),
            movie['genre']
        )

        # - Check streaming availability
        streaming_options = check_streaming_availability(movie['title'])

        # 4. Call back to ModelRiver with enriched data
        try:
            response = requests.post(
                callback_url,
                json={
                    'data': {
                        **movie_data,
                        'id': movie['id'],
                        'database_id': movie['database_id'],
                        'recommendations': recommendations,
                        'streaming': streaming_options,
                        'processed_at': datetime.now().isoformat()
                    },
                    'task_id': f"movie_{movie['id']}",
                    'metadata': {
                        'processing_time_ms': 234,
                        'sources_checked': 3,
                        'recommendations_count': len(recommendations)
                    }
                },
                headers={
                    'Authorization': f"Bearer {os.environ['MODELRIVER_API_KEY']}",
                    'Content-Type': 'application/json'
                },
                timeout=10
            )
            response.raise_for_status()
            print(f'✅ Callback sent for movie {movie["id"]}')
        except requests.exceptions.RequestException as e:
            print(f'❌ Callback failed: {e}')
            # Send error to ModelRiver
            requests.post(
                callback_url,
                json={
                    'error': 'processing_failed',
                    'message': str(e)
                },
                headers={
                    'Authorization': f"Bearer {os.environ['MODELRIVER_API_KEY']}",
                    'Content-Type': 'application/json'
                }
            )

# Mock functions (implement your actual logic)
def save_movie_to_database(movie_data):
    # Your database logic
    return {
        'id': 'mov_123',
        'database_id': 456,
        **movie_data
    }

def generate_recommendations(user_id, genres):
    # Your recommendation engine
    return [
        {'title': 'The Matrix', 'rating': 8.7},
        {'title': 'Interstellar', 'rating': 8.6}
    ]

def check_streaming_availability(title):
    # Check streaming services API
    return {
        'netflix': True,
        'prime': False,
        'hulu': False
    }
```
4. Callback API specification
After processing the AI response, call back to ModelRiver with the final data:
Endpoint: POST {callback_url}
Headers:
```
Authorization: Bearer {your_api_key}
Content-Type: application/json
```
Success payload:
```json
{
  "data": {
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genre": ["Sci-Fi", "Thriller"],
    "rating": 8.8,
    "id": "mov_123",
    "database_id": 456,
    "recommendations": [
      { "title": "The Matrix", "rating": 8.7 },
      { "title": "Interstellar", "rating": 8.6 }
    ],
    "streaming": {
      "netflix": true,
      "prime": false
    }
  },
  "task_id": "movie_123",
  "metadata": {
    "processing_time_ms": 234,
    "sources_checked": 3
  }
}
```
Error payload:
```json
{
  "error": "processing_failed",
  "message": "Database connection timeout"
}
```
Response: ModelRiver returns 200 OK on successful callback.
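Assembled from the spec above, a small helper can build the callback request before handing it to an HTTP client. The build_callback_request function is illustrative, not part of any ModelRiver SDK:

```python
def build_callback_request(callback_url, api_key, data, task_id=None, metadata=None):
    """Assemble the URL, headers, and JSON body for the ModelRiver callback.

    `data` is the enriched payload; `task_id` and `metadata` are the optional
    fields shown in the success payload above.
    """
    body = {"data": data}
    if task_id is not None:
        body["task_id"] = task_id
    if metadata is not None:
        body["metadata"] = metadata
    return {
        "url": callback_url,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": body,
    }

# Usage sketch (requests assumed installed):
# import os, requests
# req = build_callback_request(payload["callback_url"],
#                              os.environ["MODELRIVER_API_KEY"],
#                              {"title": "Inception", "id": "mov_123"},
#                              task_id="movie_123")
# requests.post(req["url"], headers=req["headers"], json=req["json"], timeout=10)
```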
5. Frontend receives final response
ModelRiver broadcasts the final response (including your enriched data) to the WebSocket channel. Your frontend using the ModelRiver Client SDK receives:
```json
{
  "status": "completed",
  "data": {
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genre": ["Sci-Fi", "Thriller"],
    "rating": 8.8,
    "id": "mov_123",
    "database_id": 456,
    "recommendations": [
      { "title": "The Matrix", "rating": 8.7 },
      { "title": "Interstellar", "rating": 8.6 }
    ],
    "streaming": {
      "netflix": true,
      "prime": false
    }
  },
  "ai_response": {
    "data": {
      "title": "Inception",
      "year": 2010,
      "director": "Christopher Nolan",
      "genre": ["Sci-Fi", "Thriller"],
      "rating": 8.8
    }
  },
  "customer_data": {
    "user_id": "user_456",
    "preferences": "action,scifi"
  }
}
```
Note: Both the enriched data (with your additions) and the original ai_response are available to the frontend.
Timeout handling
If your backend doesn't call back within 5 minutes, ModelRiver automatically:
- Sends a timeout error to the WebSocket channel
- Logs the timeout event
- Marks the request as failed
Testing event-driven workflows in the playground
The playground automatically simulates the complete event-driven flow when testing workflows with event_name set:
- AI generates the response
- "Simulating backend callback" message appears
- After ~1.5s delay, a simulated callback response is generated
- Final response is displayed with both original AI data and simulated enrichments
This helps you validate your workflow logic before implementing the actual webhook callback in production.
Webhook delivery and retries
Retry policy
ModelRiver implements exponential backoff with the following schedule:
| Attempt | Delay |
|---|---|
| 1 | Immediate |
| 2 | 5 seconds |
| 3 | 30 seconds |
| 4 | 2 minutes |
| 5 | 10 minutes |
| 6 | 30 minutes |
| 7 | 1 hour |
| 8 | 2 hours |
After 8 failed attempts, the webhook is moved to the Dead Letter Queue (DLQ) for manual inspection.
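For planning alerting and on-call response, the schedule above implies a total retry window of 13,355 seconds (roughly 3 hours 43 minutes) before a webhook reaches the DLQ, assuming each delay is measured from the previous attempt:

```python
# Delays (in seconds) before attempts 1 through 8, taken from the table above.
delays = [0, 5, 30, 120, 600, 1800, 3600, 7200]

# Total elapsed time from the first attempt to the final (8th) attempt.
total_window_seconds = sum(delays)  # 13355 s ≈ 3 h 42 m 35 s
```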
Successful delivery
Your endpoint should return a 2xx status code (preferably 200 OK) to acknowledge successful receipt.
Failed delivery
Any non-2xx status code, network timeout, or connection error triggers a retry.
Monitoring webhooks
All webhook deliveries are logged in your project's Request Logs:
- Timeline view shows each delivery attempt
- Status indicators mark success/failure
- Payload inspection lets you view the exact data sent
- Callback logs (for event-driven workflows) show your backend's response
Filter logs by event_name to isolate specific event-driven workflows.
Security best practices
- Always verify signatures – Never process webhooks without validating the mr-signature header
- Use HTTPS endpoints – ModelRiver only sends webhooks to https:// URLs in production
- Implement idempotency – Use channel_id to deduplicate webhook deliveries
- Set reasonable timeouts – Respond to webhooks within 10 seconds; use background jobs for long-running tasks
- Rate limit – Protect your webhook endpoints from abuse or accidental loops
- Store secrets securely – Keep webhook signature secrets in environment variables, never in code
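The idempotency point can be sketched as a dedupe keyed on channel_id. An in-memory set is used here purely for illustration; a real deployment would back this with Redis or a database unique constraint, since retried deliveries can land on different processes:

```python
# Seen channel IDs. Illustrative only: replace with Redis SETNX or a
# database unique constraint for multi-process deployments.
_seen_channel_ids = set()

def handle_once(payload, process):
    """Invoke process(payload) at most once per channel_id.

    Returns True if the payload was processed, False if it was a duplicate
    delivery (or had no channel_id) and was skipped.
    """
    channel_id = payload.get("channel_id")
    if channel_id is None or channel_id in _seen_channel_ids:
        return False  # duplicate or unidentifiable: acknowledge, skip work
    _seen_channel_ids.add(channel_id)
    process(payload)
    return True
```

Either way, return 200 for duplicates so ModelRiver stops retrying a delivery you have already handled.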
Common patterns
Pattern 1: Fire-and-forget notifications
Standard webhook (no event_name): Receive AI results and trigger side effects (send emails, update databases) without blocking the AI response.
Pattern 2: Tool/function calling workflows
Event-driven workflow with event_name: AI generates a plan, your backend executes tool calls, then you return the final result to the frontend.
Pattern 3: Approval workflows
Event-driven workflow: AI generates content, your backend routes it to an approval queue, human approves, then you call back with the approved content.
Pattern 4: Multi-stage processing
Event-driven workflow: Chain multiple processing steps (AI → validation → enrichment → formatting) before delivering to users.
Next steps
- Review Workflows to understand how to configure event-driven settings
- Check Client SDK for frontend WebSocket integration
- Explore API Integration for async request patterns
- See Observability to monitor webhook deliveries