
Event-driven AI for backend frameworks

Receive webhooks carrying AI-generated results, execute your custom business logic, and call back to ModelRiver: step-by-step guides for every major backend framework.

Overview

Every event-driven AI implementation follows the same pattern, regardless of framework:

  1. Register a webhook endpoint in ModelRiver that points to your backend
  2. Verify the signature on every incoming webhook
  3. Process the AI response: run custom logic (database, APIs, validation)
  4. Call back to the callback_url with your enriched data

The guides below show idiomatic implementations for each framework, including signature verification, async processing, error handling, and callback patterns.
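Signature verification (step 2) is where most integrations go wrong, so here is a minimal sketch in TypeScript using Node's built-in crypto module. It assumes the mr-signature header carries a hex-encoded HMAC-SHA256 of the raw request body keyed with your webhook secret; confirm the exact scheme in the guide for your framework before relying on it.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Assumption: mr-signature is a hex-encoded HMAC-SHA256 of the raw body,
// keyed with your webhook secret. Check your framework guide for the
// exact scheme before using this in production.
export function verifySignature(
  rawBody: string,
  signatureHeader: string,
  webhookSecret: string
): boolean {
  const expected = createHmac("sha256", webhookSecret)
    .update(rawBody)
    .digest("hex");

  const received = Buffer.from(signatureHeader, "hex");
  const computed = Buffer.from(expected, "hex");

  // timingSafeEqual throws on length mismatch, so guard first.
  return received.length === computed.length && timingSafeEqual(received, computed);
}
```

Always verify against the raw request body, not a re-serialized copy: most frameworks parse JSON before your handler runs, and re-encoding it can change the bytes and break the signature.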


Supported frameworks

Framework      Language     Highlights                                    Guide
Next.js        TypeScript   API routes, server actions, edge functions    View guide →
Nuxt.js        TypeScript   Server routes, Nitro engine, auto-imports     View guide →
Django         Python       Views, Celery tasks, Django REST Framework    View guide →
FastAPI        Python       Async handlers, BackgroundTasks, Pydantic     View guide →
Laravel        PHP          Queued jobs, middleware, event broadcasting   View guide →
Rails          Ruby         Active Job, Action Controller, credentials    View guide →
Phoenix        Elixir       GenServer, channels, Oban background jobs     View guide →
Spring Boot    Java         RestController, async processing, WebClient   View guide →
.NET           C#           Minimal APIs, hosted services, HttpClient     View guide →

Common pattern

Every framework implementation follows this structure:

  1. POST /webhooks/modelriver: ModelRiver delivers the AI result
  2. Verify the mr-signature header
  3. Check type === "task.ai_generated"
  4. Return 200 immediately (acknowledge receipt)
  5. Process in the background:
     - Extract ai_response, event, callback_url, and customer_data
     - Execute your custom business logic
     - POST the enriched data to callback_url
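As a concrete reference, below is a minimal, framework-neutral sketch of that flow in TypeScript using Web-standard Request/Response handlers (the shape used by Next.js App Router routes, among others). The payload fields ai_response, event, callback_url, and customer_data come from the pattern above; verifySignature is the helper sketched earlier, while enrich and the WEBHOOK_SECRET environment variable are placeholders for your own business logic and configuration.

```typescript
import { verifySignature } from "./verify"; // helper from the sketch above

export async function POST(req: Request): Promise<Response> {
  // Step 1: ModelRiver POSTs the AI result to this endpoint.
  const rawBody = await req.text();
  const signature = req.headers.get("mr-signature") ?? "";

  // Step 2: verify the signature before trusting anything in the body.
  if (!verifySignature(rawBody, signature, process.env.WEBHOOK_SECRET ?? "")) {
    return new Response("invalid signature", { status: 401 });
  }

  // Step 3: only handle AI-generation events.
  const payload = JSON.parse(rawBody);
  if (payload.type !== "task.ai_generated") {
    return new Response("ignored", { status: 200 });
  }

  // Step 4: acknowledge immediately; do the real work out of band.
  // On serverless runtimes, hand this to a queue or background task
  // instead of a floating promise.
  void processInBackground(payload);
  return new Response("ok", { status: 200 });
}

// Step 5: extract the payload fields, run your logic, post the result back.
async function processInBackground(payload: {
  ai_response: unknown;
  event: string;
  callback_url: string;
  customer_data: unknown;
}): Promise<void> {
  const { ai_response, event, callback_url, customer_data } = payload;

  // Your custom business logic: database writes, API calls, validation...
  const enriched = await enrich(ai_response, event, customer_data);

  // ...then POST the enriched data back to the callback_url.
  await fetch(callback_url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(enriched),
  });
}

// Placeholder: replace with your actual enrichment logic.
async function enrich(aiResponse: unknown, event: string, customerData: unknown) {
  return { event, ai_response: aiResponse, customer_data: customerData };
}
```

Returning 200 before doing the heavy work keeps the webhook delivery fast and avoids retries caused by timeouts; the framework guides below show the idiomatic background mechanism for each stack (Celery, BackgroundTasks, queued jobs, Active Job, Oban, and so on).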

Next steps