Next.js + ModelRiver

Streaming AI chat, server actions, and edge functions: all routed through ModelRiver for failover, cost tracking, and structured outputs.

Overview

Next.js is the leading full-stack React framework. Combine it with the Vercel AI SDK and ModelRiver to build production-grade AI chat interfaces, streaming UIs, and AI-powered server actions in minutes.

What you get:

  • Streaming chat UIs with the Vercel AI SDK
  • Server-side AI calls that never expose your API key
  • Edge function support for low-latency responses
  • Automatic failover across providers

Quick start

Install dependencies

Bash
npx create-next-app@latest my-ai-app
cd my-ai-app
npm install ai @ai-sdk/openai openai zod

Environment variables

Bash
# .env.local
MODELRIVER_API_KEY=mr_live_YOUR_API_KEY

Because the variable has no NEXT_PUBLIC_ prefix, Next.js keeps it server-only and never bundles it into client JavaScript.

Streaming chat with Vercel AI SDK

API route

TYPESCRIPT
// app/api/chat/route.ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

// Route the AI SDK's OpenAI-compatible provider through ModelRiver
const modelriver = createOpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: process.env.MODELRIVER_API_KEY!,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: modelriver("my-chat-workflow"),
    messages,
  });

  return result.toDataStreamResponse();
}

Chat component

By default, useChat posts to /api/chat, so it pairs with the route above without any configuration.

TSX
// app/page.tsx
"use client";

import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={`p-4 rounded-lg ${
              m.role === "user" ? "bg-blue-100 ml-12" : "bg-gray-100 mr-12"
            }`}
          >
            <p className="text-sm font-medium mb-1">
              {m.role === "user" ? "You" : "AI"}
            </p>
            <p>{m.content}</p>
          </div>
        ))}
      </div>

      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Type a message..."
          className="flex-1 p-2 border rounded-lg"
          disabled={isLoading}
        />
        <button
          type="submit"
          disabled={isLoading}
          className="px-4 py-2 bg-blue-600 text-white rounded-lg disabled:opacity-50"
        >
          Send
        </button>
      </form>
    </div>
  );
}

Server actions

Server actions let you call ModelRiver from React components without writing an API route; the code below only ever runs on the server.

TYPESCRIPT
// app/actions.ts
"use server";

import { createOpenAI } from "@ai-sdk/openai";
import { generateText, generateObject } from "ai";
import { z } from "zod";

const modelriver = createOpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: process.env.MODELRIVER_API_KEY!,
});

export async function summarise(text: string) {
  const { text: summary } = await generateText({
    model: modelriver("my-summary-workflow"),
    prompt: `Summarise this text in one paragraph:\n\n${text}`,
  });
  return summary;
}

export async function extractEntities(text: string) {
  const { object } = await generateObject({
    model: modelriver("my-extraction-workflow"),
    schema: z.object({
      people: z.array(z.string()),
      places: z.array(z.string()),
      dates: z.array(z.string()),
    }),
    prompt: `Extract entities from:\n\n${text}`,
  });
  return object;
}
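
Server actions can be imported and called like async functions from client components. A minimal sketch of a form that calls summarise (the component and file name are illustrative, not part of the scaffold above):

TSX
// app/summarise-form.tsx (hypothetical component)
"use client";

import { useState, type FormEvent } from "react";
import { summarise } from "./actions";

export default function SummariseForm() {
  const [summary, setSummary] = useState("");
  const [loading, setLoading] = useState(false);

  async function handleSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    const text = new FormData(e.currentTarget).get("text") as string;
    setLoading(true);
    try {
      // Runs on the server; the ModelRiver key never reaches the browser
      setSummary(await summarise(text));
    } finally {
      setLoading(false);
    }
  }

  return (
    <form onSubmit={handleSubmit} className="space-y-2">
      <textarea name="text" rows={6} className="w-full p-2 border rounded-lg" />
      <button
        type="submit"
        disabled={loading}
        className="px-4 py-2 bg-blue-600 text-white rounded-lg disabled:opacity-50"
      >
        {loading ? "Summarising..." : "Summarise"}
      </button>
      {summary && <p className="p-4 bg-gray-100 rounded-lg">{summary}</p>}
    </form>
  );
}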

API route (non-streaming)

ModelRiver exposes an OpenAI-compatible endpoint, so the plain openai SDK works too; just point it at the ModelRiver base URL.

TYPESCRIPT
// app/api/generate/route.ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: process.env.MODELRIVER_API_KEY!,
});

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const completion = await client.chat.completions.create({
    model: "my-chat-workflow",
    messages: [{ role: "user", content: prompt }],
  });

  return Response.json({
    content: completion.choices[0].message.content,
  });
}
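
Calling this route from the browser is a plain fetch. A hypothetical client-side helper (the path matches the file above; error handling is minimal):

TYPESCRIPT
// Illustrative helper for any client component
async function generate(prompt: string): Promise<string> {
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Generation failed with status ${res.status}`);
  const { content } = await res.json();
  return content;
}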

Edge function

The streaming route runs unchanged on the Edge Runtime; the only addition is the runtime export.

TYPESCRIPT
// app/api/edge-chat/route.ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

export const runtime = "edge";

const modelriver = createOpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: process.env.MODELRIVER_API_KEY!,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: modelriver("my-chat-workflow"),
    messages,
  });

  return result.toDataStreamResponse();
}

Best practices

  1. Never expose API keys: Always call ModelRiver from server-side routes or server actions
  2. Use streaming for chat: The Vercel AI SDK handles SSE parsing automatically
  3. Use generateObject for structured data: Combine with ModelRiver structured outputs for double validation, as shown in the sketch after this list
  4. Deploy to edge for lowest latency: ModelRiver endpoints are globally distributed
  5. Monitor in Request Logs: Track per-route costs in Observability
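
A minimal sketch of the double-validation pattern from point 3, reusing the extraction workflow name from the server actions example; generateObject throws when the response fails the local Zod check, so the catch branch marks the double-validation failure:

TYPESCRIPT
// app/actions.ts (hypothetical addition)
"use server";

import { createOpenAI } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";

const modelriver = createOpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: process.env.MODELRIVER_API_KEY!,
});

const contactSchema = z.object({
  name: z.string(),
  email: z.string().email(),
});

export async function extractContact(text: string) {
  try {
    // ModelRiver enforces the workflow's structured output server-side;
    // generateObject re-validates the response locally against the Zod schema.
    const { object } = await generateObject({
      model: modelriver("my-extraction-workflow"),
      schema: contactSchema,
      prompt: `Extract the contact details from:\n\n${text}`,
    });
    return object;
  } catch {
    // Thrown when the model output fails local schema validation
    return null;
  }
}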

Next steps