Webhooks enable your backend to receive real-time notifications when async AI requests complete. Event-driven workflows extend this further, allowing you to process AI-generated data with custom logic before sending the final response to your users.
Why use webhooks?
- Async background processing – Don't keep client connections open; ModelRiver notifies your backend when requests finish
- Reliable delivery – Automatic retries with exponential backoff ensure your backend receives notifications
- Signed payloads – Verify authenticity with HMAC signatures
- Full audit trail – All delivery attempts are logged with timestamps and statuses
- Event-driven workflows – Execute custom business logic between AI generation and final response
Setting up webhooks
1. Create a webhook endpoint in your console
- Navigate to Webhooks in your project
- Click Create Webhook
- Enter your endpoint URL (e.g., https://api.yourapp.com/webhooks/ai)
- Optionally add a description
- Save and note the webhook ID
2. Configure your webhook signature secret
ModelRiver signs all webhook payloads so you can verify their authenticity:
- When creating a webhook, ModelRiver generates a signature secret
- Store this secret securely in your environment
- Use it to verify the mr-signature header on incoming requests
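To exercise your endpoint locally before going live, you can compute a signature the same way and send yourself a test request. A minimal sketch of the signing step, assuming the signature is an HMAC-SHA256 hex digest of the JSON body (confirm the exact signed bytes against your console, since serialization differences change the digest):

```python
import hashlib
import hmac
import json

def sign_payload(payload, secret):
    """Compute the hex HMAC-SHA256 signature for a webhook payload."""
    body = json.dumps(payload).encode("utf-8")
    return hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()

# Sign a minimal payload with a throwaway test secret
signature = sign_payload({"type": "task.completed"}, "test_secret")
print(signature)  # 64-character hex digest
```

Send this digest as the mr-signature header when replaying payloads against your local handler.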
Standard webhooks
For workflows without an event_name, ModelRiver sends the complete AI response to your webhook endpoint immediately after processing.
Webhook payload structure
{
  "type": "task.completed",
  "workflow": "customer-support-summary",
  "status": "success",
  "channel_id": "550e8400-e29b-41d4-a716-446655440000",
  "data": {
    "summary": "Customer requested refund for order #12345...",
    "sentiment": "negative",
    "category": "billing"
  },
  "meta": {
    "provider": "openai",
    "model": "gpt-4o",
    "tokens": {
      "prompt": 245,
      "completion": 89,
      "total": 334
    },
    "duration_ms": 2341,
    "attempts": [
      {
        "provider": "openai",
        "model": "gpt-4o",
        "duration_ms": 2341,
        "success": true
      }
    ]
  },
  "customer_data": {
    "user_id": "user_789",
    "session_id": "sess_abc123"
  },
  "timestamp": "2026-01-05T12:34:56.789Z"
}
Headers sent with webhooks
| Header | Description |
|---|---|
| mr-signature | HMAC-SHA256 signature of the payload |
| mr-timestamp | Unix timestamp when the webhook was sent |
| mr-channel-id | Unique identifier for this request |
| content-type | Always application/json |
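The mr-timestamp header can also guard against replayed deliveries: reject payloads whose timestamp is too far from your clock. A minimal sketch (the 5-minute tolerance here is an assumption, not a documented ModelRiver limit — tune it for your infrastructure):

```python
import time

MAX_SKEW_SECONDS = 300  # assumed tolerance window

def is_fresh(timestamp_header, now=None):
    """Reject deliveries whose mr-timestamp falls outside the tolerance window."""
    try:
        sent_at = int(timestamp_header)
    except (TypeError, ValueError):
        return False  # missing or malformed header
    now = time.time() if now is None else now
    return abs(now - sent_at) <= MAX_SKEW_SECONDS

print(is_fresh("100", now=200.0))   # True: within the window
print(is_fresh("100", now=1000.0))  # False: too old, likely a replay
```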
Verifying webhook signatures
Verify the mr-signature header to ensure the request came from ModelRiver:
Node.js (Express):
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

function verifyWebhookSignature(payload, signature, secret) {
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(JSON.stringify(payload))
    .digest('hex');
  const received = Buffer.from(signature || '');
  const expected = Buffer.from(expectedSignature);
  // timingSafeEqual throws if lengths differ, so compare lengths first
  return received.length === expected.length &&
    crypto.timingSafeEqual(received, expected);
}

app.post('/webhooks/ai', (req, res) => {
  const signature = req.headers['mr-signature'];
  const webhookSecret = process.env.MODELRIVER_WEBHOOK_SECRET;

  if (!verifyWebhookSignature(req.body, signature, webhookSecret)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  // Process the webhook payload
  const { type, data, customer_data } = req.body;

  // Your business logic here
  console.log('AI completed:', data);

  res.status(200).json({ received: true });
});
Python (Flask):
import hmac
import hashlib
import json
import os

from flask import Flask, request, jsonify

app = Flask(__name__)

def verify_webhook_signature(payload, signature, secret):
    expected_signature = hmac.new(
        secret.encode('utf-8'),
        json.dumps(payload).encode('utf-8'),
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(signature or '', expected_signature)

@app.route('/webhooks/ai', methods=['POST'])
def handle_webhook():
    signature = request.headers.get('mr-signature')
    webhook_secret = os.environ['MODELRIVER_WEBHOOK_SECRET']

    if not verify_webhook_signature(request.json, signature, webhook_secret):
        return jsonify({'error': 'Invalid signature'}), 401

    # Process the webhook payload
    payload = request.json
    event_type = payload['type']
    data = payload['data']

    # Your business logic here
    print(f'AI completed: {data}')

    return jsonify({'received': True}), 200
Event-driven workflows
Event-driven workflows enable a three-step flow:
- AI generates – ModelRiver processes the AI request
- You process – Your backend executes custom logic (database updates, tool calls, validation)
- Final response – ModelRiver broadcasts the completed result to WebSocket channels
This is ideal for scenarios where you need to:
- Execute tool/function calls based on AI output
- Validate and enrich AI responses with database data
- Implement multi-step workflows with approval gates
- Trigger side effects (notifications, database updates) before returning to users
Setting up event-driven workflows
1. Add an event name to your workflow
When creating or editing a workflow in the console, set the Event name field:
curl -X POST https://api.modelriver.com/api/console/workflow \
-H "Authorization: Bearer mr_live_your_key" \
-H "Content-Type: application/json" \
-d '{
"name": "movie_suggestion",
"event_name": "new_movie_suggestion",
"provider": "openai",
"model": "gpt-4o",
"structured_output_id": "schema_abc123"
}'
Or via the dashboard:
- Open Workflows in your project
- Click Create Workflow or edit an existing workflow
- In the Event-Driven Workflow section, enter an event name (e.g., new_movie_suggestion)
- Save the workflow
2. Webhook payload for event-driven workflows
When a workflow with an event_name completes, ModelRiver sends a different payload structure:
{
  "type": "task.ai_generated",
  "event": "new_movie_suggestion",
  "channel_id": "550e8400-e29b-41d4-a716-446655440000",
  "ai_response": {
    "data": {
      "title": "Inception",
      "year": 2010,
      "director": "Christopher Nolan",
      "genre": ["Sci-Fi", "Thriller"],
      "rating": 8.8
    }
  },
  "callback_url": "https://api.modelriver.com/api/v1/callback/550e8400-e29b-41d4-a716-446655440000",
  "callback_required": true,
  "meta": {
    "workflow_id": "wf_abc123",
    "workflow_name": "movie_suggestion",
    "project_id": "proj_xyz789",
    "provider": "openai",
    "model": "gpt-4o"
  },
  "customer_data": {
    "user_id": "user_456",
    "preferences": "action,scifi"
  },
  "timestamp": "2026-01-05T12:34:56.789Z"
}
Key differences from standard webhooks:
- type is task.ai_generated (not task.completed)
- event contains your custom event name
- ai_response wraps the AI-generated data
- callback_url is provided for you to call back to ModelRiver
- callback_required: true indicates ModelRiver is waiting for your callback
3. Process the AI response in your backend
Your webhook endpoint receives the AI response and can execute custom logic:
Node.js (Express) - Full Example:
const express = require('express');
const crypto = require('crypto');
const axios = require('axios');

const app = express();
app.use(express.json());

// Verify webhook signature
function verifyWebhookSignature(payload, signature, secret) {
  const expectedSignature = crypto
    .createHmac('sha256', secret)
    .update(JSON.stringify(payload))
    .digest('hex');
  const received = Buffer.from(signature || '');
  const expected = Buffer.from(expectedSignature);
  // timingSafeEqual throws if lengths differ, so compare lengths first
  return received.length === expected.length &&
    crypto.timingSafeEqual(received, expected);
}

// Handle event-driven webhook
app.post('/webhooks/ai', async (req, res) => {
  const signature = req.headers['mr-signature'];
  const webhookSecret = process.env.MODELRIVER_WEBHOOK_SECRET;

  // 1. Verify the webhook signature
  if (!verifyWebhookSignature(req.body, signature, webhookSecret)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  const { type, event, ai_response, callback_url, customer_data } = req.body;

  // 2. Check if this is an event-driven workflow
  if (type === 'task.ai_generated' && callback_url) {
    // Respond immediately to acknowledge receipt
    res.status(200).json({ received: true });

    // 3. Process the AI response asynchronously
    processEventDrivenWorkflow(event, ai_response, callback_url, customer_data)
      .catch(error => {
        console.error('Error processing event-driven workflow:', error);
        // Report the failure to ModelRiver
        axios.post(callback_url, {
          error: 'processing_failed',
          message: error.message,
        }, {
          headers: {
            'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}`,
            'Content-Type': 'application/json'
          }
        }).catch(err => console.error('Error callback failed:', err));
      });
  } else {
    // Standard webhook (no event name)
    const { data } = req.body;
    console.log('Standard webhook received:', data);
    res.status(200).json({ received: true });
  }
});

async function processEventDrivenWorkflow(event, aiResponse, callbackUrl, customerData) {
  console.log(`Processing event: ${event}`);

  // Example: add a movie to the database and generate recommendations
  if (event === 'new_movie_suggestion') {
    const movieData = aiResponse.data;

    // 4. Execute your custom business logic
    // - Save to database
    const movie = await saveMovieToDatabase(movieData);

    // - Generate recommendations based on user preferences
    const recommendations = await generateRecommendations(
      customerData.user_id,
      movie.genre
    );

    // - Get streaming availability
    const streamingOptions = await checkStreamingAvailability(movie.title);

    // 5. Call back to ModelRiver with the enriched data
    await axios.post(callbackUrl, {
      data: {
        // Original AI data
        ...movieData,
        // Your enriched data
        id: movie.id,
        database_id: movie.database_id,
        recommendations: recommendations,
        streaming: streamingOptions,
        processed_at: new Date().toISOString()
      },
      task_id: `movie_${movie.id}`,
      metadata: {
        processing_time_ms: 234,
        sources_checked: 3,
        recommendations_count: recommendations.length
      }
    }, {
      headers: {
        'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}`,
        'Content-Type': 'application/json'
      }
    });

    console.log(`✅ Callback sent for movie ${movie.id}`);
  }
}

// Mock functions (implement your actual business logic)
async function saveMovieToDatabase(movieData) {
  // Your database logic here
  return {
    id: 'mov_123',
    database_id: 456,
    ...movieData
  };
}

async function generateRecommendations(userId, genres) {
  // Your recommendation engine here
  return [
    { title: 'The Matrix', rating: 8.7 },
    { title: 'Interstellar', rating: 8.6 }
  ];
}

async function checkStreamingAvailability(title) {
  // Check streaming services API
  return {
    netflix: true,
    prime: false,
    hulu: false
  };
}

app.listen(3000, () => {
  console.log('Webhook server running on port 3000');
});
Python (Django) - Full Example:
import hashlib
import hmac
import json
import os
from datetime import datetime

import requests
from celery import shared_task
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods

def verify_webhook_signature(payload, signature, secret):
    expected_signature = hmac.new(
        secret.encode('utf-8'),
        json.dumps(payload).encode('utf-8'),
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(signature or '', expected_signature)

@csrf_exempt
@require_http_methods(["POST"])
def webhook_handler(request):
    # 1. Verify the webhook signature
    signature = request.headers.get('mr-signature')
    webhook_secret = os.environ['MODELRIVER_WEBHOOK_SECRET']

    try:
        payload = json.loads(request.body)
    except json.JSONDecodeError:
        return JsonResponse({'error': 'Invalid JSON'}, status=400)

    if not verify_webhook_signature(payload, signature, webhook_secret):
        return JsonResponse({'error': 'Invalid signature'}, status=401)

    event_type = payload.get('type')
    callback_url = payload.get('callback_url')

    # 2. Check if this is an event-driven workflow
    if event_type == 'task.ai_generated' and callback_url:
        # Respond immediately to acknowledge receipt;
        # process asynchronously in a background task
        process_event_driven_workflow.delay(
            event=payload.get('event'),
            ai_response=payload.get('ai_response'),
            callback_url=callback_url,
            customer_data=payload.get('customer_data', {})
        )
        return JsonResponse({'received': True}, status=200)
    else:
        # Standard webhook (no event name)
        data = payload.get('data', {})
        print(f'Standard webhook received: {data}')
        return JsonResponse({'received': True}, status=200)

# Celery task (the @shared_task decorator provides .delay)
@shared_task
def process_event_driven_workflow(event, ai_response, callback_url, customer_data):
    """Process the AI response with custom business logic"""
    print(f'Processing event: {event}')

    if event == 'new_movie_suggestion':
        movie_data = ai_response['data']

        # 3. Execute your custom business logic
        # - Save to database
        movie = save_movie_to_database(movie_data)

        # - Generate recommendations
        recommendations = generate_recommendations(
            customer_data.get('user_id'),
            movie['genre']
        )

        # - Check streaming availability
        streaming_options = check_streaming_availability(movie['title'])

        # 4. Call back to ModelRiver with enriched data
        try:
            response = requests.post(
                callback_url,
                json={
                    'data': {
                        **movie_data,
                        'id': movie['id'],
                        'database_id': movie['database_id'],
                        'recommendations': recommendations,
                        'streaming': streaming_options,
                        'processed_at': datetime.now().isoformat()
                    },
                    'task_id': f"movie_{movie['id']}",
                    'metadata': {
                        'processing_time_ms': 234,
                        'sources_checked': 3,
                        'recommendations_count': len(recommendations)
                    }
                },
                headers={
                    'Authorization': f"Bearer {os.environ['MODELRIVER_API_KEY']}",
                    'Content-Type': 'application/json'
                },
                timeout=10
            )
            response.raise_for_status()
            print(f'✅ Callback sent for movie {movie["id"]}')
        except requests.exceptions.RequestException as e:
            print(f'❌ Callback failed: {e}')
            # Report the failure to ModelRiver
            requests.post(
                callback_url,
                json={
                    'error': 'processing_failed',
                    'message': str(e)
                },
                headers={
                    'Authorization': f"Bearer {os.environ['MODELRIVER_API_KEY']}",
                    'Content-Type': 'application/json'
                }
            )

# Mock functions (implement your actual logic)
def save_movie_to_database(movie_data):
    # Your database logic
    return {
        'id': 'mov_123',
        'database_id': 456,
        **movie_data
    }

def generate_recommendations(user_id, genres):
    # Your recommendation engine
    return [
        {'title': 'The Matrix', 'rating': 8.7},
        {'title': 'Interstellar', 'rating': 8.6}
    ]

def check_streaming_availability(title):
    # Check streaming services API
    return {
        'netflix': True,
        'prime': False,
        'hulu': False
    }
4. Callback API specification
After processing the AI response, call back to ModelRiver with the final data:
Endpoint: POST {callback_url}
Headers:
- Authorization: Bearer {your_api_key}
- Content-Type: application/json
Success payload:
{
  "data": {
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genre": ["Sci-Fi", "Thriller"],
    "rating": 8.8,
    "id": "mov_123",
    "database_id": 456,
    "recommendations": [
      { "title": "The Matrix", "rating": 8.7 },
      { "title": "Interstellar", "rating": 8.6 }
    ],
    "streaming": {
      "netflix": true,
      "prime": false
    }
  },
  "task_id": "movie_123",
  "metadata": {
    "processing_time_ms": 234,
    "sources_checked": 3
  }
}
Error payload:
{
  "error": "processing_failed",
  "message": "Database connection timeout"
}
Response: ModelRiver returns 200 OK on successful callback.
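Stripped of business logic, the callback reduces to a single authenticated POST. A minimal sketch using only the standard library (urllib instead of the requests dependency used above; error handling omitted):

```python
import json
import os
import urllib.request

def build_callback_payload(data, task_id=None, metadata=None):
    """Assemble the success payload for the callback endpoint."""
    payload = {"data": data}
    if task_id is not None:
        payload["task_id"] = task_id
    if metadata is not None:
        payload["metadata"] = metadata
    return payload

def send_callback(callback_url, payload, api_key):
    """POST the payload back to ModelRiver; returns the HTTP status code."""
    req = urllib.request.Request(
        callback_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Optional fields are simply omitted when not provided
payload = build_callback_payload({"title": "Inception"}, task_id="movie_123")
```

On failure, send the error payload shown above through the same function in place of the success payload.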
5. Frontend receives final response
ModelRiver broadcasts the final response (including your enriched data) to the WebSocket channel. Your frontend using the ModelRiver Client SDK receives:
{
  "status": "completed",
  "data": {
    "title": "Inception",
    "year": 2010,
    "director": "Christopher Nolan",
    "genre": ["Sci-Fi", "Thriller"],
    "rating": 8.8,
    "id": "mov_123",
    "database_id": 456,
    "recommendations": [
      { "title": "The Matrix", "rating": 8.7 },
      { "title": "Interstellar", "rating": 8.6 }
    ],
    "streaming": {
      "netflix": true,
      "prime": false
    }
  },
  "ai_response": {
    "data": {
      "title": "Inception",
      "year": 2010,
      "director": "Christopher Nolan",
      "genre": ["Sci-Fi", "Thriller"],
      "rating": 8.8
    }
  },
  "customer_data": {
    "user_456",
    "preferences": "action,scifi"
  }
}
Note: Both the enriched data (with your additions) and the original ai_response are available to the frontend.
Timeout handling
If your backend doesn't call back within 5 minutes, ModelRiver automatically:
- Sends a timeout error to the WebSocket channel
- Logs the timeout event
- Marks the request as failed
Testing event-driven workflows in the playground
The playground automatically simulates the complete event-driven flow when testing workflows with event_name set:
- AI generates the response
- "Simulating backend callback" message appears
- After ~1.5s delay, a simulated callback response is generated
- Final response is displayed with both original AI data and simulated enrichments
This helps you validate your workflow logic before implementing the actual webhook callback in production.
Webhook delivery and retries
Retry policy
ModelRiver implements exponential backoff with the following schedule:
| Attempt | Delay |
|---|---|
| 1 | Immediate |
| 2 | 5 seconds |
| 3 | 30 seconds |
| 4 | 2 minutes |
| 5 | 10 minutes |
| 6 | 30 minutes |
| 7 | 1 hour |
| 8 | 2 hours |
After 8 failed attempts, the webhook is moved to the Dead Letter Queue (DLQ) for manual inspection.
Successful delivery
Your endpoint should return a 2xx status code (preferably 200 OK) to acknowledge successful receipt.
Failed delivery
Any non-2xx status code, network timeout, or connection error triggers a retry.
Monitoring webhooks
All webhook deliveries are logged in your project's Request Logs:
- Timeline view shows each delivery attempt
- Status indicators mark success/failure
- Payload inspection lets you view the exact data sent
- Callback logs (for event-driven workflows) show your backend's response
Filter logs by event_name to isolate specific event-driven workflows.
Security best practices
- Always verify signatures – Never process webhooks without validating the mr-signature header
- Use HTTPS endpoints – ModelRiver only sends webhooks to https:// URLs in production
- Implement idempotency – Use channel_id to deduplicate webhook deliveries
- Set reasonable timeouts – Respond to webhooks within 10 seconds; use background jobs for long-running tasks
- Rate limit – Protect your webhook endpoints from abuse or accidental loops
- Store secrets securely – Keep webhook signature secrets in environment variables, never in code
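Because retries mean the same delivery can arrive more than once, deduplicate on channel_id before running side effects. A minimal in-memory sketch (swap the set for Redis or a database unique constraint in production, since in-process state is lost on restart):

```python
processed_ids = set()  # use Redis or a DB unique constraint in production

def handle_once(channel_id, handler, payload):
    """Invoke handler only on the first delivery of a given channel_id."""
    if channel_id in processed_ids:
        return False  # duplicate delivery (retry) — safely ignored
    processed_ids.add(channel_id)
    handler(payload)
    return True

seen = []
first = handle_once("ch_1", seen.append, {"data": 1})
second = handle_once("ch_1", seen.append, {"data": 1})
print(first, second, len(seen))  # True False 1
```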
Common patterns
Pattern 1: Fire-and-forget notifications
Standard webhook (no event_name): Receive AI results and trigger side effects (send emails, update databases) without blocking the AI response.
Pattern 2: Tool/function calling workflows
Event-driven workflow with event_name: AI generates a plan, your backend executes tool calls, then you return the final result to the frontend.
Pattern 3: Approval workflows
Event-driven workflow: AI generates content, your backend routes it to an approval queue, human approves, then you call back with the approved content.
Pattern 4: Multi-stage processing
Event-driven workflow: Chain multiple processing steps (AI → validation → enrichment → formatting) before delivering to users.
Next steps
- Review Workflows to understand how to configure event-driven settings
- Check Client SDK for frontend WebSocket integration
- Explore API Integration for async request patterns
- See Dashboard Overview to monitor webhook deliveries