Event-driven workflows

Execute custom business logic between AI generation and final response. Enrich, validate, and transform AI output before delivery.

Three-step flow

Event-driven workflows enable a powerful processing pipeline:

  1. AI generates – ModelRiver processes the AI request with your configured provider
  2. You process – Your backend receives the AI output and executes custom logic (database updates, tool calls, validation, enrichment)
  3. Final response – You call back to ModelRiver, which broadcasts the completed result to WebSocket channels

When to use event-driven workflows

  • Execute tool/function calls based on AI output (see the sketch after this list)
  • Validate and enrich AI responses with database data
  • Implement multi-step workflows with approval gates
  • Trigger side effects (notifications, database updates) before returning to users
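
For the first use case above, the processing step between webhook and callback is where tool calls run. The sketch below assumes the AI output includes a tool_calls array; the tool names and the lookupShowtimes and sendPushNotification helpers are hypothetical, not a documented ModelRiver contract.

JAVASCRIPT
// Illustrative tool dispatch between AI generation and the final callback.
// The tool_calls field, tool names, and helper functions are assumptions.
const toolRegistry = {
  search_showtimes: ({ title, year }) => lookupShowtimes(title, year),
  notify_user: ({ user_id }) => sendPushNotification(user_id),
};

async function runTools(aiData) {
  const results = {};
  for (const call of aiData.tool_calls ?? []) {
    const tool = toolRegistry[call.name];
    if (tool) {
      results[call.name] = await tool(call.arguments);
    }
  }
  // Merge tool results with the AI output before calling back to ModelRiver.
  return { ...aiData, tool_results: results };
}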

Setting up

1. Add an event name to your workflow

When creating or editing a workflow in the console, set the Event name field, or set it via the API:

Bash
curl -X POST https://api.modelriver.com/console/workflow \
  -H "Authorization: Bearer mr_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "movie_suggestion",
    "event_name": "new_movie_suggestion",
    "provider": "openai",
    "model": "gpt-4o",
    "structured_output_id": "schema_abc123"
  }'

2. Webhook payload

When the AI generation step of a workflow with an event_name finishes, ModelRiver sends a webhook payload that differs from standard webhooks in these ways:

  • type is task.ai_generated (not task.completed)
  • event contains your custom event name
  • ai_response wraps the AI-generated data
  • callback_url is provided for you to call back to ModelRiver
  • callback_required: true indicates ModelRiver is waiting for your callback

JSON
{
  "type": "task.ai_generated",
  "event": "new_movie_suggestion",
  "channel_id": "550e8400-e29b-41d4-a716-446655440000",
  "ai_response": {
    "data": {
      "title": "Inception",
      "year": 2010,
      "director": "Christopher Nolan"
    }
  },
  "callback_url": "https://api.modelriver.com/v1/callback/550e8400...",
  "callback_required": true,
  "customer_data": {
    "user_id": "user_456"
  },
  "timestamp": "2026-01-05T12:34:56.789Z"
}
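
For orientation, here is a minimal sketch of a backend endpoint receiving this payload, assuming an Express app; the route path and the handleEventTask helper are illustrative placeholders, not part of ModelRiver.

JAVASCRIPT
// Minimal Express receiver for the task.ai_generated webhook (sketch only).
const express = require('express');

const app = express();
app.use(express.json());

app.post('/webhooks/modelriver', (req, res) => {
  const { type, event, ai_response, callback_url, callback_required } = req.body;

  // Acknowledge the webhook quickly, then continue processing asynchronously.
  res.sendStatus(200);

  if (type === 'task.ai_generated' && callback_required) {
    // handleEventTask is your own function; the next step shows the callback it makes.
    handleEventTask(event, ai_response.data, callback_url).catch(console.error);
  }
});

app.listen(3000);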

3. Process and call back

After executing your custom logic, call back to ModelRiver with the enriched data:

JAVASCRIPT
// axios is used for the HTTP callback (any HTTP client works)
const axios = require('axios');

// callbackUrl and aiResponse come from the webhook payload above

// Your backend processes the AI response
const enrichedData = await processAIResponse(aiResponse.data);

// Call back to ModelRiver
await axios.post(callbackUrl, {
  data: enrichedData,
  task_id: "movie_123",
  metadata: { processing_time_ms: 234 }
}, {
  headers: {
    'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}`,
    'Content-Type': 'application/json'
  }
});

4. Frontend receives final response

ModelRiver broadcasts the enriched response to the WebSocket channel. Your frontend receives both the original ai_response and your enriched data.
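
As a rough sketch, a browser client might subscribe to the channel like this; the WebSocket URL, the message shape, and the renderResult function are assumptions based on the fields shown above, not documented ModelRiver values.

JAVASCRIPT
// Frontend listener sketch (the URL and message shape are assumptions).
const channelId = '550e8400-e29b-41d4-a716-446655440000'; // from your request's channel_id
const socket = new WebSocket(`wss://api.modelriver.com/ws/${channelId}`);

socket.addEventListener('message', (event) => {
  const message = JSON.parse(event.data);

  // The final broadcast carries the original AI output plus your enriched data.
  // Inspect message.type to separate progress updates, errors, and the final
  // result; the exact values are not documented here, so this dispatch is illustrative.
  renderResult(message); // renderResult is your own UI function
});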

Timeout handling

If your backend doesn't call back within 5 minutes, ModelRiver automatically:

  1. Sends a timeout error to the WebSocket channel
  2. Logs the timeout event
  3. Marks the request as failed
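
One way to stay inside that window is to bound your own processing and fall back to the raw AI output if enrichment runs long. The Promise.race pattern below is a generic sketch, not a ModelRiver requirement; processAIResponse, aiResponse, and callbackUrl are the same illustrative names used in the callback example above.

JAVASCRIPT
// Bound custom processing well under the 5-minute callback window (sketch only).
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('processing timed out')), ms)
    ),
  ]);

let data;
try {
  // Leave headroom: give enrichment at most 4 minutes.
  data = await withTimeout(processAIResponse(aiResponse.data), 4 * 60 * 1000);
} catch (err) {
  // Fall back to the unmodified AI output so users still get a response.
  data = aiResponse.data;
}

await axios.post(callbackUrl, { data }, {
  headers: { 'Authorization': `Bearer ${process.env.MODELRIVER_API_KEY}` }
});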

Testing in the playground

The playground automatically simulates the complete event-driven flow when testing workflows with event_name set. After a delay of roughly 1.5 seconds, a simulated callback response is generated so you can validate your workflow logic before production.
