Why this example matters
Traditional AI integrations require you to manage complex infrastructure: polling for responses, handling timeouts, managing connection states, and building custom streaming solutions. ModelRiver eliminates all of this.
With just a few lines of code, you get:
- True end-to-end streaming — From user input to AI response, everything flows through WebSockets in real-time
- Zero infrastructure overhead — No need to build or maintain your own streaming server
- Automatic failover — Built-in provider fallback when AI models are unavailable
- Structured outputs — Define JSON schemas and get perfectly formatted, type-safe responses
- Event-driven callbacks — Inject custom logic between AI processing and response delivery
What you'll build
A full-stack chatbot application with:
| Feature | Description |
|---|---|
| Real-time chat UI | Modern React interface with instant message delivery |
| Async AI processing | Non-blocking requests via ModelRiver's async API |
| WebSocket streaming | Live response delivery using @modelriver/client SDK |
| Structured responses | AI responses with sentiment analysis, confidence scores, and action items |
| Custom ID injection | Track conversations with your own UUIDs throughout the lifecycle |
Architecture overview
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│ React Frontend  │────▶│ Node.js Server  │────▶│ ModelRiver API  │
│   (Your App)    │     │ (Your Backend)  │     │  (AI Gateway)   │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         ▲                                               │
         │                                               ▼
         │                                      ┌─────────────────┐
         │                                      │  AI Processing  │
         │                                      │  (Background)   │
         │                                      └─────────────────┘
         │                                               │
         │              ┌─────────────────┐              │
         │              │  Webhook Event  │◀─────────────┘
         │              │ + ID Injection  │
         │              └─────────────────┘
         │                       │
         │                       ▼
         │              ┌─────────────────┐
         │              │  Callback URL   │
         │              │ (to ModelRiver) │
         │              └─────────────────┘
         │                       │
         └───────────────────────┘
       WebSocket Stream (Real-time Response)
```
The magic: event-driven callbacks
This is where ModelRiver truly shines. When you send an async AI request:
- ModelRiver processes your request and generates the AI response
- Before delivering to your frontend, ModelRiver sends a webhook to your backend
- Your backend enriches the response with custom IDs, validates data, or triggers other actions
- Your backend calls back to ModelRiver with the enriched payload
- ModelRiver streams the final response to your frontend via WebSocket
This pattern enables use cases that are impractical with traditional AI APIs (a sketch of the webhook and callback payloads follows this list):
- Custom ID injection for database tracking
- Response validation and filtering
- Approval gates for sensitive content
- Multi-step enrichment pipelines
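Concretely, the webhook your backend receives looks roughly like this. The field names come from the code snippets later in this guide; the values are illustrative, and real payloads may carry additional fields:

```json
{
  "channel_id": "ch_abc123",
  "ai_response": { "data": { "reply": "...", "sentiment": "positive" } },
  "callback_url": "<provided by ModelRiver in the webhook payload>"
}
```

Your backend then POSTs its enriched version of `ai_response.data` to `callback_url`, as shown in the "Processing webhooks" snippet later in this guide.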
Step-by-step setup guide
Follow these steps to get the chatbot running locally. This guide covers everything from cloning to your first AI response.
Prerequisites
- Node.js 16+
- A ModelRiver account (you can create one at console.modelriver.com)
Step 1: Clone and install
```bash
git clone https://github.com/modelriver/modelriver-chatbot-demo.git
cd modelriver-chatbot-demo

# Install backend dependencies
cd backend && npm install

# Install frontend dependencies
cd ../frontend && npm install
```
Step 2: Set up ModelRiver console
2.1 Create a project
- Go to console.modelriver.com
- Click New Project and give it a name (e.g., `my-chatbot`)
2.2 Connect an AI provider
- Navigate to Providers in your project
- Click Add Provider and select one (e.g., OpenAI, Anthropic)
- Enter your provider's API key and save
2.3 Create a structured output
This defines the JSON format for AI responses. Navigate to Structured Outputs → Create Structure.
- Name: `chatbot_response`
- Sample data: Paste this example response:
```json
{
  "reply": "The AI's direct response",
  "summary": "Brief summary of the conversation",
  "sentiment": "positive | negative | neutral | mixed",
  "confidence": 0.95,
  "topics": ["topic1", "topic2"],
  "action_items": [
    { "task": "Description", "priority": "high | medium | low" }
  ]
}
```
- Click Build schema from sample data — this auto-generates a typed JSON schema with examples for better AI output accuracy (a sketch of what that schema might look like follows this list)
- Save the structure
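For orientation, the schema generated from the sample above would be conceptually similar to this sketch (illustrative only; the console's actual output format may differ):

```json
{
  "type": "object",
  "properties": {
    "reply": { "type": "string" },
    "summary": { "type": "string" },
    "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral", "mixed"] },
    "confidence": { "type": "number" },
    "topics": { "type": "array", "items": { "type": "string" } },
    "action_items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "task": { "type": "string" },
          "priority": { "type": "string", "enum": ["high", "medium", "low"] }
        }
      }
    }
  }
}
```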
2.4 Create a workflow
Navigate to Workflows → Create Workflow.
| Setting | Value |
|---|---|
| Name | mr_chatbot_workflow (must match the default in the code) |
| Provider | Select your connected provider |
| Model | Choose a model (e.g., gpt-5-mini, claude-haiku-4-5) |
| Structured output | Select chatbot_response |
| Event name | new_chat (triggers webhook callbacks) |
Save the workflow.
2.5 Create an API key
Navigate to API Keys → Create Key.
- Enter a key name (e.g., `chatbot-dev`)
- Set expiration (choose "Never" for development, or a shorter period for testing)
- Click Create Key
- Copy the key immediately — you won't be able to see it again
2.6 Create a webhook for local development
Navigate to Webhooks → Create Webhook.
- Select Localhost (CLI) as the webhook type — this enables the CLI to receive webhooks
- Leave the Secret field empty to auto-generate one, or enter your own
- Enable the webhook and click Create webhook
- Copy the secret — you'll need this for `WEBHOOK_SECRET` in your `.env`
Step 3: Configure environment
Create `backend/.env` with the values from Step 2:

```
MODELRIVER_API_KEY=your_api_key_from_step_2.5
PORT=4000
BACKEND_PUBLIC_URL=http://localhost:4000
WEBHOOK_SECRET=your_webhook_secret_from_step_2.6
EVENT_NAME=new_chat
```

Important: The `EVENT_NAME` must match the event name in your workflow.
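If you want the backend to fail fast on a missing variable, here is a minimal sketch (assuming the demo loads `.env` with the `dotenv` package):

```js
// Load backend/.env and verify required configuration up front.
require('dotenv').config();

const required = ['MODELRIVER_API_KEY', 'BACKEND_PUBLIC_URL', 'WEBHOOK_SECRET', 'EVENT_NAME'];
for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}
```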
Step 4: Set up local webhook forwarding
Your localhost:4000 isn't accessible from the internet, so ModelRiver can't send webhooks directly. The CLI solves this.
Install and authenticate:
```bash
npm install -g @modelriver/cli
modelriver login
```

Start listening:
```bash
modelriver listen
```

You'll see:
```
✓ Connected to ModelRiver
✓ Forwarding webhooks → http://localhost:4000/webhook/modelriver
```

Note on security: For production, you should verify webhook signatures to ensure requests are genuinely from ModelRiver.
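A minimal sketch of what that verification might look like in the Express backend. The header name and HMAC-SHA256 scheme here are assumptions for illustration, not documented ModelRiver behavior; check the webhooks guide for the actual signing details.

```js
const crypto = require('crypto');

// Hypothetical verification: recompute an HMAC of the raw request body with
// your webhook secret and compare it to the signature header in constant time.
function verifyWebhookSignature(rawBody, signatureHeader) {
  if (!signatureHeader) return false;
  const expected = crypto
    .createHmac('sha256', process.env.WEBHOOK_SECRET) // assumed scheme
    .update(rawBody)
    .digest('hex');
  const a = Buffer.from(signatureHeader);
  const b = Buffer.from(expected);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

// Usage inside the webhook route (header name is an assumption):
// if (!verifyWebhookSignature(req.rawBody, req.headers['x-modelriver-signature'])) {
//   return res.status(401).json({ error: 'invalid signature' });
// }
```

Note that HMAC verification needs the raw request body; with Express you can capture it via the `verify` option of `express.json()`.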
Step 5: Run the application
Open three terminal tabs:
Tab 1 — CLI (already running):
```bash
modelriver listen
```

Tab 2 — backend:

```bash
cd backend && npm start
```

Tab 3 — frontend:

```bash
cd frontend && npm run dev
```

Step 6: Test it!
- Open `http://localhost:3006` in your browser
- Type a message and hit send
- Watch the magic:
- Your message goes to your backend
- Backend sends async request to ModelRiver
- ModelRiver processes with AI and sends webhook
- CLI forwards webhook to your localhost
- Backend enriches response and calls back
- Frontend receives structured response via WebSocket
You should see a formatted response with sentiment analysis, topics, and action items!
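For example, the payload behind that rendering might look like this (illustrative values; `id` and `conversation_id` are the fields your backend injected in the webhook step):

```json
{
  "id": "msg-uuid-from-your-backend",
  "conversation_id": "conv-uuid-from-your-backend",
  "reply": "Hi there! How can I help you today?",
  "summary": "User greeted the assistant.",
  "sentiment": "positive",
  "confidence": 0.97,
  "topics": ["greeting"],
  "action_items": []
}
```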
Key implementation details
Sending an AI request (backend)
```js
// POST /chat endpoint
const pendingRequests = new Map(); // channel_id → { conversationId, messageId }

app.post('/chat', async (req, res) => {
  const { message, workflow = 'mr_chatbot_workflow' } = req.body;

  // Generate custom IDs for tracking
  const conversationId = uuidv4();
  const messageId = uuidv4();

  // Send async request to ModelRiver
  const response = await fetch(`${MODELRIVER_API_URL}/v1/ai/async`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${MODELRIVER_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      workflow,
      messages: [{ role: 'user', content: message }],
      delivery_method: 'websocket',
      webhook_url: `${BACKEND_PUBLIC_URL}/webhook/modelriver`,
      events: ['webhook_received'],
      metadata: { conversationId, messageId }
    })
  });

  const data = await response.json();

  // Remember the IDs so the webhook handler can enrich the response later
  pendingRequests.set(data.channel_id, { conversationId, messageId });

  // Return WebSocket connection details to frontend
  res.json({
    channel_id: data.channel_id,
    websocket_url: data.websocket_url,
    ws_token: data.ws_token
  });
});
```
Receiving AI responses (frontend)
```jsx
import { useModelRiver } from '@modelriver/client';

function ChatApp() {
  const { connect, message, status } = useModelRiver();

  const sendMessage = async (text) => {
    // Get WebSocket details from your backend
    const res = await fetch('/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' }, // so Express parses the body
      body: JSON.stringify({ message: text })
    });
    const { websocket_url, ws_token, channel_id } = await res.json();

    // Connect to ModelRiver WebSocket
    connect({ websocket_url, ws_token, channel_id });
  };

  // message updates in real-time as AI responds
  return (
    <div>
      <div className="ai-response">{message}</div>
      <button onClick={() => sendMessage('Hello!')}>Send</button>
    </div>
  );
}
```
Processing webhooks (backend)
```js
// Webhook endpoint receives the AI response before it reaches the frontend
app.post('/webhook/modelriver', async (req, res) => {
  const { channel_id, ai_response, callback_url } = req.body;

  // Retrieve the custom IDs stored by the /chat endpoint
  const pending = pendingRequests.get(channel_id);
  if (!pending) {
    // Unknown channel; nothing to enrich
    return res.status(200).json({ success: false });
  }
  pendingRequests.delete(channel_id);

  // Enrich the response with your IDs
  const enrichedData = {
    id: pending.messageId,
    conversation_id: pending.conversationId,
    ...ai_response.data
  };

  // Send enriched data back to ModelRiver
  await fetch(callback_url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(enrichedData)
  });

  res.status(200).json({ success: true });
});
```
Understanding the response
When structured output is configured (as in Step 2.3), the frontend automatically renders:
| Field | Display |
|---|---|
| Reply | Main AI response (prominently displayed) |
| Sentiment | Visual indicator (positive, neutral, negative) |
| Confidence | Color-coded progress bar |
| Topics | Interactive tag pills |
| Action items | Prioritized list with color indicators |
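A simplified sketch of how that rendering might look in React; the demo app's actual components and styling will differ:

```jsx
function StructuredReply({ data }) {
  return (
    <div className="structured-reply">
      {/* Main AI response */}
      <p className="reply">{data.reply}</p>

      {/* Sentiment indicator and confidence bar */}
      <span className={`sentiment ${data.sentiment}`}>{data.sentiment}</span>
      <progress value={data.confidence} max="1" />

      {/* Topic tag pills */}
      <ul className="topics">
        {data.topics.map((topic) => <li key={topic}>{topic}</li>)}
      </ul>

      {/* Prioritized action items */}
      <ol className="action-items">
        {data.action_items.map((item) => (
          <li key={item.task} className={`priority-${item.priority}`}>{item.task}</li>
        ))}
      </ol>
    </div>
  );
}
```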
API reference
Backend endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/chat` | POST | Send a chat message; returns WebSocket connection details |
| `/webhook/modelriver` | POST | Receives webhooks from ModelRiver |
| `/conversations/:id` | GET | Get conversation history |
| `/health` | GET | Health check endpoint |
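You can exercise these endpoints directly once the backend from Step 5 is running:

```bash
# Health check
curl http://localhost:4000/health

# Send a chat message; responds with WebSocket connection details as JSON
curl -X POST http://localhost:4000/chat \
  -H 'Content-Type: application/json' \
  -d '{"message": "Hello!"}'
```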
Request parameters
```json
{
  "message": "User's message (required)",
  "workflow": "Workflow name (optional, default: mr_chatbot_workflow)",
  "conversationId": "Existing conversation ID (optional)"
}
```
What makes this unique
| Capability | Traditional approach | With ModelRiver |
|---|---|---|
| Real-time streaming | Build custom WebSocket server | SDK handles everything |
| Local development | ngrok, localtunnel, etc. | modelriver listen |
| Provider failover | Manual implementation | Automatic, built-in |
| Response validation | Post-processing on frontend | Event-driven callbacks |
| Structured output | Complex prompt engineering | Define schema, get JSON |
| Custom ID tracking | Difficult across services | Native metadata injection |
Next steps
Now that you have a working chatbot, explore these areas:
- Add more providers — Configure fallback providers for reliability
- Secure your webhooks — Essential for production
- Customize the schema — Add fields relevant to your use case
- Explore request logs — Debug and monitor in the console
- Deep dive into the SDK — Advanced hooks and configuration
Resources
- GitHub repository — Full source code
- CLI documentation — All CLI commands and options
- Client SDK — React hooks reference
- Webhooks guide — Event-driven patterns
- Workflows — Building AI pipelines