Build a real-time AI chatbot

Experience end-to-end streaming AI with webhooks, a dedicated CLI for local development, and structured outputs.

Why this example matters

Traditional AI integrations require you to manage complex infrastructure: polling for responses, handling timeouts, managing connection states, and building custom streaming solutions. ModelRiver eliminates all of this.

With just a few lines of code, you get:

  • True end-to-end streaming — From user input to AI response, everything flows through WebSockets in real-time
  • Zero infrastructure overhead — No need to build or maintain your own streaming server
  • Automatic failover — Built-in provider fallback when AI models are unavailable
  • Structured outputs — Define JSON schemas and get perfectly formatted, type-safe responses
  • Event-driven callbacks — Inject custom logic between AI processing and response delivery

What you'll build

A full-stack chatbot application with:

Feature | Description
Real-time chat UI | Modern React interface with instant message delivery
Async AI processing | Non-blocking requests via ModelRiver's async API
WebSocket streaming | Live response delivery using the @modelriver/client SDK
Structured responses | AI responses with sentiment analysis, confidence scores, and action items
Custom ID injection | Track conversations with your own UUIDs throughout the lifecycle

Architecture overview

React Frontend (Your App)
      │  chat message
      ▼
Node.js Server (Your Backend)
      │  async request
      ▼
ModelRiver API (AI Gateway) ──▶ AI Processing (Background)
      │  Webhook Event + ID Injection
      ▼
Node.js Server (Your Backend)
      │  Callback URL (to ModelRiver)
      ▼
ModelRiver API ──▶ WebSocket Stream (Real-time Response) ──▶ React Frontend

The magic: event-driven callbacks

This is where ModelRiver truly shines. When you send an async AI request:

  1. ModelRiver processes your request and generates the AI response
  2. Before delivering to your frontend, ModelRiver sends a webhook to your backend
  3. Your backend enriches the response with custom IDs, validates data, or triggers other actions
  4. Your backend calls back to ModelRiver with the enriched payload
  5. ModelRiver streams the final response to your frontend via WebSocket

This pattern enables use cases impossible with traditional AI APIs:

  • Custom ID injection for database tracking
  • Response validation and filtering
  • Approval gates for sensitive content (sketched below)
  • Multi-step enrichment pipelines
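
For example, an approval gate is just a variant of the webhook handler shown later in this guide. Here is a minimal sketch; the containsSensitiveContent helper and reviewQueue are hypothetical pieces you would supply, while the payload fields (channel_id, ai_response, callback_url) match the webhook example in the implementation section below.

JAVASCRIPT
// Sketch: hold back sensitive responses for human review instead of
// calling back to ModelRiver immediately.
// containsSensitiveContent() and reviewQueue are hypothetical pieces.
app.post('/webhook/modelriver', async (req, res) => {
  const { channel_id, ai_response, callback_url } = req.body;

  if (containsSensitiveContent(ai_response.data.reply)) {
    // Queue for review; the callback to ModelRiver fires only after approval
    await reviewQueue.add({ channel_id, ai_response, callback_url });
    return res.status(200).json({ queued: true });
  }

  // Safe content: forward to ModelRiver right away
  await fetch(callback_url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(ai_response.data)
  });
  res.status(200).json({ success: true });
});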

Step-by-step setup guide

Follow these steps to get the chatbot running locally. This guide covers everything from cloning to your first AI response.

Prerequisites

  • Node.js 18 or later and npm (the backend relies on the built-in fetch API)
  • A ModelRiver account at console.modelriver.com
  • An API key from an AI provider such as OpenAI or Anthropic

Step 1: Clone and install

Bash
git clone https://github.com/modelriver/modelriver-chatbot-demo.git
cd modelriver-chatbot-demo
 
# Install backend dependencies
cd backend && npm install
 
# Install frontend dependencies
cd ../frontend && npm install

Step 2: Set up ModelRiver console

2.1 Create a project

  1. Go to console.modelriver.com
  2. Click New Project and give it a name (e.g., my-chatbot)

2.2 Connect an AI provider

  1. Navigate to Providers in your project
  2. Click Add Provider and select one (e.g., OpenAI, Anthropic)
  3. Enter your provider's API key and save

2.3 Create a structured output

This defines the JSON format for AI responses. Navigate to Structured Outputs → Create Structure.

  • Name: chatbot_response
  • Sample data: Paste this example response:
JSON
{
  "reply": "The AI's direct response",
  "summary": "Brief summary of the conversation",
  "sentiment": "positive | negative | neutral | mixed",
  "confidence": 0.95,
  "topics": ["topic1", "topic2"],
  "action_items": [
    { "task": "Description", "priority": "high | medium | low" }
  ]
}
  • Click Build schema from sample data — this auto-generates a typed JSON schema with examples for better AI output accuracy
  • Save the structure

2.4 Create a workflow

Navigate to Workflows → Create Workflow.

Setting | Value
Name | mr_chatbot_workflow (must match the default in the code)
Provider | Select your connected provider
Model | Choose a model (e.g., gpt-5-mini, claude-haiku-4-5)
Structured output | Select chatbot_response
Event name | new_chat (triggers webhook callbacks)

Save the workflow.


2.5 Create an API key

Navigate to API Keys → Create Key.

  1. Enter a key name (e.g., chatbot-dev)
  2. Set expiration (choose "Never" for development, or a shorter period for testing)
  3. Click Create Key
  4. Copy the key immediately — you won't be able to see it again

2.6 Create a webhook for local development

Navigate to Webhooks → Create Webhook.

  1. Select Localhost (CLI) as the webhook type — this enables the CLI to receive webhooks
  2. Leave the Secret field empty to auto-generate one, or enter your own
  3. Enable the webhook and click Create webhook
  4. Copy the secret — you'll need this for WEBHOOK_SECRET in your .env

Step 3: Configure environment

Create backend/.env with the values from Step 2:

Bash
MODELRIVER_API_KEY=your_api_key_from_step_2.5
PORT=4000
BACKEND_PUBLIC_URL=http://localhost:4000
WEBHOOK_SECRET=your_webhook_secret_from_step_2.6
EVENT_NAME=new_chat

Important: The EVENT_NAME must match the event name in your workflow.
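
The backend reads these values at startup. A minimal sketch of that wiring, assuming the demo loads them with the dotenv package (check the backend's entry point for the actual mechanism):

JAVASCRIPT
// Load .env values at the top of the backend entry point
// (assumes the dotenv package; adjust if the demo wires this differently)
require('dotenv').config();

const {
  MODELRIVER_API_KEY,
  PORT = 4000,
  BACKEND_PUBLIC_URL,
  WEBHOOK_SECRET,
  EVENT_NAME = 'new_chat'
} = process.env;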


Step 4: Set up local webhook forwarding

Your localhost:4000 isn't accessible from the internet, so ModelRiver can't send webhooks directly. The CLI solves this.

Install and authenticate:

Bash
npm install -g @modelriver/cli
modelriver login

Start listening:

Bash
modelriver listen

You'll see:

Connected to ModelRiver
Forwarding webhooks to http://localhost:4000/webhook/modelriver

Note on security: For production, you should verify webhook signatures to ensure requests are genuinely from ModelRiver.
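
A minimal sketch of such a check, assuming ModelRiver sends an HMAC-SHA256 hex digest of the raw request body in a signature header; the header name below is an assumption, so check the webhook documentation for the real one.

JAVASCRIPT
const crypto = require('crypto');

// Verify a webhook payload against your WEBHOOK_SECRET.
// 'x-modelriver-signature' is a placeholder header name, not confirmed.
// Note: this needs the raw body, e.g. via
// express.json({ verify: (req, res, buf) => { req.rawBody = buf; } })
function verifyWebhookSignature(rawBody, signatureHeader, secret) {
  const expected = crypto
    .createHmac('sha256', secret)
    .update(rawBody)
    .digest('hex');
  // Constant-time comparison; lengths must match before timingSafeEqual
  return (
    typeof signatureHeader === 'string' &&
    signatureHeader.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signatureHeader), Buffer.from(expected))
  );
}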


Step 5: Run the application

Open three terminal tabs:

Tab 1 — CLI (already running):

Bash
modelriver listen

Tab 2 — backend:

Bash
cd backend && npm start

Tab 3 — frontend:

Bash
cd frontend && npm run dev

Step 6: Test it!

  1. Open http://localhost:3006 in your browser
  2. Type a message and hit send
  3. Watch the magic:
    • Your message goes to your backend
    • Backend sends async request to ModelRiver
    • ModelRiver processes with AI and sends webhook
    • CLI forwards webhook to your localhost
    • Backend enriches response and calls back
    • Frontend receives structured response via WebSocket

You should see a formatted response with sentiment analysis, topics, and action items!


Key implementation details

Sending an AI request (backend)

JAVASCRIPT
const { v4: uuidv4 } = require('uuid');

// POST /chat endpoint
app.post('/chat', async (req, res) => {
  const { message, workflow = 'mr_chatbot_workflow' } = req.body;

  // Generate custom IDs for tracking
  const conversationId = uuidv4();
  const messageId = uuidv4();

  // Send async request to ModelRiver
  const response = await fetch(`${MODELRIVER_API_URL}/v1/ai/async`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${MODELRIVER_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      workflow,
      messages: [{ role: 'user', content: message }],
      delivery_method: 'websocket',
      webhook_url: `${BACKEND_PUBLIC_URL}/webhook/modelriver`,
      events: ['webhook_received'],
      metadata: { conversationId, messageId }
    })
  });
  const data = await response.json();

  // Remember the IDs so the webhook handler below can enrich the response
  pendingRequests.set(data.channel_id, { conversationId, messageId });

  // Return WebSocket connection details to frontend
  res.json({
    channel_id: data.channel_id,
    websocket_url: data.websocket_url,
    ws_token: data.ws_token
  });
});
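
To sanity-check this endpoint without the UI, you can call it directly once the backend is running on port 4000 (per Step 3):

JAVASCRIPT
// Quick manual test of POST /chat, from Node or the browser console
const res = await fetch('http://localhost:4000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello!' })
});
console.log(await res.json()); // { channel_id, websocket_url, ws_token }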

Receiving AI responses (frontend)

JSX
import { useModelRiver } from '@modelriver/client';

function ChatApp() {
  const { connect, message, status } = useModelRiver();

  const sendMessage = async (text) => {
    // Get WebSocket details from your backend
    const res = await fetch('/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message: text })
    });
    const { websocket_url, ws_token, channel_id } = await res.json();

    // Connect to ModelRiver WebSocket
    connect({ websocket_url, ws_token, channel_id });
  };

  // message updates in real-time as AI responds
  return (
    <div>
      <div className="ai-response">{message}</div>
      <button onClick={() => sendMessage('Hello!')}>Send</button>
    </div>
  );
}

Processing webhooks (backend)

JAVASCRIPT
// Webhook endpoint receives AI response before it reaches frontend
app.post('/webhook/modelriver', async (req, res) => {
  const { channel_id, ai_response, callback_url } = req.body;

  // Retrieve your custom IDs
  const pending = pendingRequests.get(channel_id);
  if (!pending) {
    // Unknown channel (e.g., server restarted); acknowledge and skip enrichment
    return res.status(200).json({ success: false });
  }

  // Enrich the response with your IDs
  const enrichedData = {
    id: pending.messageId,
    conversation_id: pending.conversationId,
    ...ai_response.data
  };

  // Send enriched data back to ModelRiver
  await fetch(callback_url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(enrichedData)
  });

  res.status(200).json({ success: true });
});
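
The pendingRequests map used in both handlers is the bridge between the /chat endpoint and the webhook. A minimal sketch of that store, kept in memory here (the trackRequest TTL wrapper is an optional addition of this sketch; a production setup might use Redis so entries survive restarts):

JAVASCRIPT
// In-memory store mapping channel_id -> custom tracking IDs.
// Written in POST /chat, read once in the webhook handler.
const pendingRequests = new Map();

// Optional hygiene: expire entries that never receive a webhook
// (the 10-minute TTL is an arbitrary choice for this sketch)
function trackRequest(channelId, ids) {
  pendingRequests.set(channelId, ids);
  setTimeout(() => pendingRequests.delete(channelId), 10 * 60 * 1000);
}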

Understanding the response

When structured output is configured (as in Step 2.3), the frontend automatically renders:

Field | Display
Reply | Main AI response (prominently displayed)
Sentiment | Visual indicator (positive, neutral, negative)
Confidence | Color-coded progress bar
Topics | Interactive tag pills
Action items | Prioritized list with color indicators
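
A rough sketch of that rendering, assuming message has already been parsed into the chatbot_response shape from Step 2.3 (the component and class names here are illustrative, not part of the demo):

JSX
// Render the structured fields defined in Step 2.3
function StructuredResponse({ message }) {
  if (!message) return null;
  return (
    <div className="structured-response">
      <p className="reply">{message.reply}</p>
      <span className={`sentiment ${message.sentiment}`}>{message.sentiment}</span>
      <progress value={message.confidence} max="1" />
      <div className="topics">
        {message.topics.map((topic) => (
          <span key={topic} className="pill">{topic}</span>
        ))}
      </div>
      <ul className="action-items">
        {message.action_items.map((item, i) => (
          <li key={i} className={item.priority}>{item.task}</li>
        ))}
      </ul>
    </div>
  );
}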

API reference

Backend endpoints

Endpoint | Method | Description
/chat | POST | Send a chat message; returns WebSocket connection details
/webhook/modelriver | POST | Receives webhooks from ModelRiver
/conversations/:id | GET | Get conversation history
/health | GET | Health check endpoint

Request parameters

JSON
{
  "message": "User's message (required)",
  "workflow": "Workflow name (optional, default: mr_chatbot_workflow)",
  "conversationId": "Existing conversation ID (optional)"
}
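
For example, a follow-up message in an existing conversation might look like this from the frontend (a sketch; existingConversationId is a placeholder for an ID obtained from an earlier exchange):

JAVASCRIPT
// Continue an existing conversation by passing its ID along
const res = await fetch('/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Tell me more about that',
    conversationId: existingConversationId // placeholder from a prior exchange
  })
});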

What makes this unique

Capability | Traditional approach | With ModelRiver
Real-time streaming | Build custom WebSocket server | SDK handles everything
Local development | ngrok, localtunnel, etc. | modelriver listen
Provider failover | Manual implementation | Automatic, built-in
Response validation | Post-processing on frontend | Event-driven callbacks
Structured output | Complex prompt engineering | Define schema, get JSON
Custom ID tracking | Difficult across services | Native metadata injection

Next steps

Now that you have a working chatbot, explore these areas:


Resources