
Real-time AI Streaming SDK

Connect your frontend directly to AI streams. Handle WebSocket lifecycles, real-time token delivery, and automatic reconnection with our first-class SDKs.

Installation

Install the SDK via your preferred package manager:

Bash
npm install @modelriver/client
# or
yarn add @modelriver/client
# or
pnpm add @modelriver/client

Or use the CDN for quick prototyping:

HTML
<script src="https://cdn.modelriver.com/client/latest/modelriver.min.js"></script>
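
When loading from the CDN, the client is exposed as a browser global rather than an ES module. A minimal sketch, assuming the bundle attaches a ModelRiver global that carries the same client constructor as the npm package (verify against the bundle you load):

TypeScript
// Assumption: the CDN bundle exposes a `ModelRiver` global with the same
// ModelRiverClient constructor as the npm package.
declare const ModelRiver: {
  ModelRiverClient: new (options?: { debug?: boolean }) => unknown;
};

const client = new ModelRiver.ModelRiverClient({ debug: true });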

How It Works

  1. Your backend calls the ModelRiver async API (e.g. /api/v1/ai/async) to start a background AI request.
  2. ModelRiver returns an async response with channel_id, a short‑lived one‑time ws_token, and WebSocket connection details.
  3. Your frontend uses this SDK to connect via WebSocket using channel_id + ws_token and receive streaming responses.
  4. The SDK handles heartbeats, channel joins, and automatic reconnection for transient network issues (while the page is open).
  5. For page refresh recovery, use the persistence + reconnect helpers (persist, hasPendingRequest, reconnect, reconnectWithBackend) together with your backend /api/v1/ai/reconnect endpoint.
Frontend ↔ Your Backend ↔ ModelRiver

  1. Frontend → Your Backend: request AI
  2. Your Backend → ModelRiver: POST /api/v1/ai/async
  3. ModelRiver → Your Backend: channel_id, ws_token, websocket_url
  4. Your Backend → Frontend: return token
  5. Frontend → ModelRiver: SDK connects via WebSocket
  6. ModelRiver → Frontend: AI response streamed
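
As a concrete illustration of the frontend side of this flow, here is a hedged sketch. The /api/start-ai route is a placeholder for your own backend endpoint, and the connect() options and "token" event name are assumptions for illustration, not the documented API; see your framework guide for the exact shape:

TypeScript
import { ModelRiverClient } from "@modelriver/client";

// Steps 1-4: your backend starts the request and relays the credentials.
// "/api/start-ai" is a placeholder for your own endpoint.
const { channel_id, ws_token } = await fetch("/api/start-ai", { method: "POST" })
  .then((res) => res.json());

// Steps 5-6: connect and stream. Method and event names here are
// illustrative assumptions; consult the framework guides for the real API.
const client = new ModelRiverClient({ debug: true });
client.connect({ channelId: channel_id, wsToken: ws_token });

client.on("token", (chunk: string) => {
  console.log("streamed token:", chunk);
});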

Quick Start

The fastest way to get started is to install the core library and use one of our first-class adapters.

Bash
npm install @modelriver/client

Choose your framework

Integration    Description                                     Guide
React          useModelRiver hook for functional components    React Guide →
Vue            useModelRiver composable for Composition API    Vue Guide →
Angular        Observable-based ModelRiverService              Angular Guide →
Svelte         Real-time reactive stores for AI streaming      Svelte Guide →
Vanilla JS     Lightweight, zero-dependency browser SDK        Vanilla JS Guide →
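
For example, the React adapter's useModelRiver hook (named in the table above) might be used like the sketch below. The package name @modelriver/react and the returned fields (status, output) are assumptions; the React Guide documents the real shape:

TypeScript
import { useModelRiver } from "@modelriver/react"; // package name assumed

// Returned fields (status, output) are hypothetical; see the React Guide.
function AiAnswer({ channelId, wsToken }: { channelId: string; wsToken: string }) {
  const { status, output } = useModelRiver({ channelId, wsToken });

  return (
    <div>
      <p>Status: {status}</p>
      <pre>{output}</pre>
    </div>
  );
}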

Core Concepts

1. Unified Real-time Stream

The SDK manages a persistent WebSocket connection to ModelRiver. It handles heartbeats, automatic reconnection, and state synchronization so your frontend always reflects the latest AI status.
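
In code, that means you subscribe to connection events once and let the client manage the socket. A minimal sketch; the "status" and "reconnect" event names are assumptions for illustration:

TypeScript
import { ModelRiverClient } from "@modelriver/client";

const client = new ModelRiverClient({ debug: true });

// Event names below are illustrative assumptions, not the documented API.
client.on("status", (state: string) => {
  // Mirror connection state (connecting, open, reconnecting...) in your UI.
  console.log("connection state:", state);
});

client.on("reconnect", (attempt: number) => {
  // Transient drops are retried automatically; your handlers stay attached.
  console.log(`reconnect attempt #${attempt}`);
});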

2. Async Lifecycle

  1. Initiation: Your backend calls the ModelRiver async API.
  2. Delivery: ModelRiver returns a ws_token and channel_id.
  3. Connection: Your frontend uses the SDK to connect with the token.
  4. Streaming: Response data and workflow steps stream in real-time.
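
Steps 1 and 2 run on your server, never in the browser. A minimal server-side sketch (Node 18+ with built-in fetch); the Authorization header and request body shape are assumptions, so check the async API reference for the real contract:

TypeScript
// Server-side only: your ModelRiver API key must never reach the browser.
const response = await fetch("https://api.modelriver.com/api/v1/ai/async", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.MODELRIVER_API_KEY}`, // assumed auth scheme
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ prompt: "Summarize this document." }), // assumed body shape
});

// Relay only the connection details to the frontend.
const { channel_id, ws_token, websocket_url } = await response.json();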

3. Workflow Steps

The SDK tracks progress through four defined steps:

  • queue: Request is being queued.
  • process: AI is processing the prompt.
  • receive: AI has finished; data is being delivered.
  • complete: Request finished successfully.
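
These steps map naturally onto a progress indicator. A sketch, assuming a "workflow_step" event; the event name is illustrative:

TypeScript
import { ModelRiverClient } from "@modelriver/client";

type WorkflowStep = "queue" | "process" | "receive" | "complete";

const client = new ModelRiverClient();

// "workflow_step" is an assumed event name for illustration.
client.on("workflow_step", (step: WorkflowStep) => {
  const labels: Record<WorkflowStep, string> = {
    queue: "Queued…",
    process: "Generating…",
    receive: "Receiving response…",
    complete: "Done",
  };
  console.log(labels[step]);
});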

Configuration Options

TypeScript
interface ModelRiverClientOptions {
  baseUrl?: string;          // WebSocket gateway (default: api.modelriver.com)
  apiBaseUrl?: string;       // Your backend URL for recovery helpers
  debug?: boolean;           // Enable console logging
  persist?: boolean;         // Enable localStorage persistence
  storageKeyPrefix?: string; // Key prefix for storage
}
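
For example, a client wired up for development might set every option explicitly; the values below are illustrative:

TypeScript
import { ModelRiverClient } from "@modelriver/client";

// All options are optional; the values here are illustrative.
const client = new ModelRiverClient({
  baseUrl: "api.modelriver.com",         // WebSocket gateway
  apiBaseUrl: "https://app.example.com", // your backend, used by recovery helpers
  debug: true,                           // verbose console logging
  persist: true,                         // keep channel_ids in localStorage
  storageKeyPrefix: "modelriver:",       // namespace for stored keys
});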

Page Refresh Recovery

By default, the SDK persists active channel_ids to localStorage. If a user refreshes the page mid-stream, the SDK can automatically reconnect to the same session by obtaining a fresh token from your backend.
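
Using the helpers listed in How It Works (hasPendingRequest, reconnectWithBackend), a page-load recovery flow might look like the sketch below; the exact call signatures are assumptions:

TypeScript
import { ModelRiverClient } from "@modelriver/client";

const client = new ModelRiverClient({
  persist: true,                         // keep active channel_ids in localStorage
  apiBaseUrl: "https://app.example.com", // backend exposing /api/v1/ai/reconnect
});

// On page load, resume any stream interrupted by a refresh. The helper names
// come from this guide; the argument shapes are assumptions.
if (client.hasPendingRequest()) {
  await client.reconnectWithBackend(); // fetch a fresh ws_token, then rejoin
}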

Refer to the Persistence & Recovery section for deep integration details.

Next Steps