## Installation
Install the SDK via your preferred package manager:
```bash
npm install @modelriver/client
# or
yarn add @modelriver/client
# or
pnpm add @modelriver/client
```

Or use the CDN for quick prototyping:
```html
<script src="https://cdn.modelriver.com/client/latest/modelriver.min.js"></script>
```

## How It Works
- Your backend calls the ModelRiver async API (e.g. `/api/v1/ai/async`) to start a background AI request.
- ModelRiver returns an async response with `channel_id`, a short-lived, one-time `ws_token`, and WebSocket connection details.
- Your frontend uses this SDK to connect via WebSocket using `channel_id` + `ws_token` and receives streaming responses.
- The SDK handles heartbeats, channel joins, and automatic reconnection for transient network issues (while the page is open).
- For page-refresh recovery, use the persistence and reconnect helpers (`persist`, `hasPendingRequest`, `reconnect`, `reconnectWithBackend`) together with your backend's `/api/v1/ai/reconnect` endpoint.
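As an illustration of how `channel_id` and `ws_token` combine into a connection, here is a minimal sketch. The query-parameter names and URL shape are assumptions for illustration only; the SDK builds the real connection URL internally from the details ModelRiver returns.

```typescript
// Illustrative only: one plausible way channel_id and ws_token could be
// combined into a WebSocket URL. The actual wire format is handled by the
// SDK; the parameter names below are assumptions, not documented behavior.
function wsUrl(websocketUrl: string, channelId: string, wsToken: string): string {
  const url = new URL(websocketUrl);
  url.searchParams.set("channel_id", channelId);
  url.searchParams.set("token", wsToken);
  return url.toString();
}

console.log(wsUrl("wss://api.modelriver.com/socket", "ch_123", "tok_abc"));
```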
```
Frontend             Your Backend            ModelRiver
    │                     │                     │
    │ 1. Request AI       │                     │
    │────────────────────>│                     │
    │                     │ 2. POST /api/v1/ai/async
    │                     │────────────────────>│
    │                     │                     │
    │                     │   3. channel_id,    │
    │                     │      ws_token,      │
    │                     │      websocket_url  │
    │                     │<────────────────────│
    │ 4. Return token     │                     │
    │<────────────────────│                     │
    │                     │                     │
    │ 5. SDK connects via WebSocket             │
    │──────────────────────────────────────────>│
    │                     │                     │
    │ 6. AI response streamed                   │
    │<──────────────────────────────────────────│
```

## Quick Start
The fastest way to get started is to install the core library and use one of our first-class adapters.
```bash
npm install @modelriver/client
```

### Choose your framework
| Integration | Description | Guide |
|---|---|---|
| React | useModelRiver hook for functional components | React Guide → |
| Vue | useModelRiver composable for Composition API | Vue Guide → |
| Angular | Observable-based ModelRiverService | Angular Guide → |
| Svelte | Real-time reactive stores for AI streaming | Svelte Guide → |
| Vanilla JS | Lightweight, zero-dependency browser SDK | Vanilla JS Guide → |
## Core Concepts
### 1. Unified Real-time Stream
The SDK manages a persistent WebSocket connection to ModelRiver. It handles heartbeats, automatic reconnection, and state synchronization so your frontend always reflects the latest AI status.
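As a simplified model of the reconnection behavior described above, the sketch below computes capped exponential-backoff delays. This is not the SDK's actual implementation; the base delay, cap, and function name are assumptions chosen for illustration.

```typescript
// Simplified model of an automatic-reconnection policy: exponential backoff
// with an upper cap. Constants here are illustrative, not the SDK's values.
function backoffDelayMs(attempt: number, baseMs = 1000, maxMs = 30000): number {
  // attempt 0 -> baseMs, then doubles each retry until it hits maxMs
  return Math.min(maxMs, baseMs * 2 ** attempt);
}
```

A policy like this keeps transient network blips cheap to recover from (fast first retries) while avoiding hammering the gateway during a longer outage.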
### 2. Async Lifecycle
- Initiation: Your backend calls the ModelRiver async API.
- Delivery: ModelRiver returns a `ws_token` and `channel_id`.
- Connection: Your frontend uses the SDK to connect with the token.
- Streaming: Response data and workflow steps stream in real-time.
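A backend-side sketch of the Initiation and Delivery phases, assuming a Node runtime with global `fetch`. The request body shape and `Authorization` header are assumptions for illustration; `channel_id` and `ws_token` are the response fields described above.

```typescript
// Sketch of the "Initiation" and "Delivery" lifecycle phases, server side.
// The payload and auth scheme are hypothetical; only the endpoint path and
// the channel_id / ws_token response fields come from the docs above.
interface AsyncDelivery {
  channelId: string;
  wsToken: string;
}

// Pure helper: validate and normalize the async response body.
function extractDelivery(body: Record<string, unknown>): AsyncDelivery {
  const { channel_id, ws_token } = body as { channel_id?: string; ws_token?: string };
  if (!channel_id || !ws_token) {
    throw new Error("ModelRiver async response missing channel_id or ws_token");
  }
  return { channelId: channel_id, wsToken: ws_token };
}

async function startAiRequest(apiKey: string, prompt: string): Promise<AsyncDelivery> {
  const res = await fetch("https://api.modelriver.com/api/v1/ai/async", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // hypothetical auth scheme
    },
    body: JSON.stringify({ prompt }), // hypothetical request shape
  });
  return extractDelivery(await res.json());
}
```

Your backend would then hand the resulting `wsToken` and `channelId` to the frontend, which passes them to the SDK for the Connection phase.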
### 3. Workflow Steps
The SDK tracks progress through four defined steps:
- `queue`: Request is being queued.
- `process`: AI is processing the prompt.
- `receive`: AI has finished; data is being delivered.
- `complete`: Request finished successfully.
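Because the four steps are ordered, they map naturally onto a progress indicator. The helper below is not part of the SDK; it is a small sketch using the step names listed above.

```typescript
// The four workflow steps, in their documented order.
const WORKFLOW_STEPS = ["queue", "process", "receive", "complete"] as const;
type WorkflowStep = (typeof WORKFLOW_STEPS)[number];

// Hypothetical helper (not an SDK export): fraction complete for a step,
// useful for driving a progress bar in the UI.
function stepProgress(step: WorkflowStep): number {
  return (WORKFLOW_STEPS.indexOf(step) + 1) / WORKFLOW_STEPS.length;
}
```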
## Configuration Options
```ts
interface ModelRiverClientOptions {
  baseUrl?: string;          // WebSocket gateway (default: api.modelriver.com)
  apiBaseUrl?: string;       // Your backend URL for recovery helpers
  debug?: boolean;           // Enable console logging
  persist?: boolean;         // Enable localStorage persistence
  storageKeyPrefix?: string; // Key prefix for storage
}
```

## Page Refresh Recovery
By default, the SDK persists active `channel_id`s to localStorage. If a user refreshes the page mid-stream, the SDK can automatically reconnect to the same session by obtaining a fresh token from your backend.
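The sketch below is a simplified model of that persistence behavior, written against a generic key-value store so it also works outside the browser. The storage key and function names are assumptions; the SDK itself exposes `persist`, `hasPendingRequest`, `reconnect`, and `reconnectWithBackend` for this instead.

```typescript
// Simplified model of channel persistence across page refreshes. In the
// browser, window.localStorage satisfies this interface directly.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

const ACTIVE_CHANNEL_KEY = "modelriver:active_channel"; // hypothetical key

// Record the channel for an in-flight request.
function persistChannel(store: KeyValueStore, channelId: string): void {
  store.setItem(ACTIVE_CHANNEL_KEY, channelId);
}

// After a refresh: is there a session to resume? Returns its channel_id.
function pendingChannel(store: KeyValueStore): string | null {
  return store.getItem(ACTIVE_CHANNEL_KEY);
}

// Clear the record once the request completes (or recovery succeeds).
function clearChannel(store: KeyValueStore): void {
  store.removeItem(ACTIVE_CHANNEL_KEY);
}
```

On load, a recovered `channel_id` would be sent to your backend's `/api/v1/ai/reconnect` endpoint to obtain a fresh `ws_token` before reconnecting.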
Refer to the Persistence & Recovery section for deep integration details.
## Next Steps
- Review the API documentation for backend setup.
- Explore Workflows for model orchestration.
- Check Event-driven AI for webhook patterns.