Documentation

Migrate to ModelRiver in minutes

Switch from direct provider APIs to ModelRiver with minimal code changes. Get automatic failover, cost tracking, and structured outputs, all for free.

Migrating to ModelRiver from a direct provider API typically requires changing just two lines of code: your base URL and API key. This guide covers migration paths for the most common providers.

Migrating from OpenAI

Step 1: Create a workflow

  1. Open the ModelRiver console
  2. Navigate to Workflows → Create Workflow
  3. Name it (e.g., my-gpt4-chat)
  4. Select OpenAI as the provider and your preferred model
  5. Optionally add a fallback provider (e.g., Anthropic Claude)
  6. Save

Step 2: Update your code

Before (direct OpenAI):

PYTHON
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_OPENAI_KEY"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)

After (ModelRiver):

PYTHON
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",  # ← changed
    api_key="mr_live_YOUR_API_KEY"             # ← changed
)

response = client.chat.completions.create(
    model="my-gpt4-chat",  # ← workflow name
    messages=[{"role": "user", "content": "Hello"}]
)

Node.js

Before:

JAVASCRIPT
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-YOUR_OPENAI_KEY",
});

After:

JAVASCRIPT
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.modelriver.com/v1", // ← changed
  apiKey: "mr_live_YOUR_API_KEY", // ← changed
});

That's it: no other code changes needed. All response formats remain identical.
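In production, you may also prefer to read the ModelRiver key from the environment rather than hardcoding it. A minimal sketch, assuming an illustrative variable name (MODELRIVER_API_KEY is not a name ModelRiver requires), with the client construction shown commented:

```python
import os

# Read the key from the environment so it stays out of source control;
# fall back to a placeholder for local experimentation. The variable
# name MODELRIVER_API_KEY is illustrative.
api_key = os.environ.get("MODELRIVER_API_KEY", "mr_live_YOUR_API_KEY")

# client = OpenAI(
#     base_url="https://api.modelriver.com/v1",
#     api_key=api_key,
# )
```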


Migrating from Anthropic

Step 1: Create a workflow

Same as above, but select Anthropic as the provider and choose your Claude model.

Step 2: Update your code

Before (direct Anthropic):

PYTHON
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-YOUR_KEY")

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)

print(response.content[0].text)

After (ModelRiver with OpenAI SDK):

PYTHON
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="my-claude-chat",  # workflow configured with Claude
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)

print(response.choices[0].message.content)

Note: When migrating from Anthropic, you'll switch to the OpenAI response format. The response.content[0].text becomes response.choices[0].message.content.
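To make that mapping concrete, here is a minimal sketch using a hand-built sample response in the OpenAI chat-completions shape (the reply text is invented):

```python
# Sample response in the OpenAI chat-completions shape (contents invented).
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hi there!"}}
    ]
}

# Anthropic style was response["content"][0]["text"]; after migrating,
# read the reply from choices[0] instead:
text = response["choices"][0]["message"]["content"]
print(text)  # Hi there!
```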

Or use the native API for format-agnostic responses:

JAVASCRIPT
const response = await fetch("https://api.modelriver.com/v1/ai", {
  method: "POST",
  headers: {
    Authorization: "Bearer mr_live_YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    workflow: "my-claude-chat",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello" }],
  }),
});

const data = await response.json();
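If you prefer Python, the same native-API call can be sketched with only the standard library; the network call itself is left commented so the request shape is the focus:

```python
import json
import urllib.request

# Same payload as the JavaScript example above.
payload = {
    "workflow": "my-claude-chat",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    "https://api.modelriver.com/v1/ai",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer mr_live_YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)

# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
```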

Migrating from LangChain

Before

PYTHON
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_key="sk-YOUR_OPENAI_KEY",
    model="gpt-4o"
)

After

PYTHON
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base="https://api.modelriver.com/v1",  # ← changed
    openai_api_key="mr_live_YOUR_API_KEY",  # ← changed
    model="my-workflow"  # ← workflow name
)

Migrating from LlamaIndex

Before

PYTHON
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    api_key="sk-YOUR_OPENAI_KEY",
    model="gpt-4o"
)

After

PYTHON
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    api_base="https://api.modelriver.com/v1",  # ← changed
    api_key="mr_live_YOUR_API_KEY",  # ← changed
    model="my-workflow"  # ← workflow name
)

Migrating from Vercel AI SDK

Before

TYPESCRIPT
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
  apiKey: "sk-YOUR_OPENAI_KEY",
});

After

TYPESCRIPT
import { createOpenAI } from "@ai-sdk/openai";

const modelriver = createOpenAI({
  baseURL: "https://api.modelriver.com/v1", // ← changed
  apiKey: "mr_live_YOUR_API_KEY", // ← changed
});

What you get after migration

Feature             | Direct provider    | ModelRiver
Automatic failover  | ❌ Manual          | ✅ Automatic
Cost tracking       | ❌ Manual          | ✅ Built-in
Request logging     | ❌ Build your own  | ✅ Automatic
Structured outputs  | ❌ Per-request     | ✅ Workflow-level
Switch providers    | ❌ Code change     | ✅ Console toggle
Rate limit handling | ❌ Manual          | ✅ Automatic
Webhook integration | ❌ Build your own  | ✅ Built-in

Migration checklist

  • Create a workflow in the ModelRiver console
  • Generate a production API key
  • Update base_url / baseURL in your code
  • Update api_key / apiKey to your ModelRiver key
  • Update model to your workflow name
  • Test in development environment
  • Add fallback providers for production resilience
  • Configure structured outputs (optional)
  • Set up webhooks for async workflows (optional)
  • Deploy to production
  • Monitor via Request Logs

FAQ

Do I need to change response handling?

No. When using the OpenAI compatibility layer, responses are identical to OpenAI's format. Your existing response parsing code works unchanged.

Can I still use streaming?

Yes. Streaming works identically: set stream: true and receive SSE events in the same format. See Streaming.
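As a sketch, the only request-level change for streaming is stream=True; the chunk-handling loop below assumes the OpenAI SDK's usual delta shape, with the network call left commented:

```python
# The only request-level change is stream=True; everything else is as before.
request_args = {
    "model": "my-gpt4-chat",  # workflow name
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}

# stream = client.chat.completions.create(**request_args)
# for chunk in stream:
#     delta = chunk.choices[0].delta.content
#     if delta:
#         print(delta, end="", flush=True)
```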

What about function calling?

Yes. The tools and tool_choice parameters work the same way. See Function calling.
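A minimal sketch of a tool definition in the standard OpenAI tools format, using a hypothetical get_weather function; you pass it to ModelRiver exactly as you would to OpenAI (the request itself is shown commented):

```python
# Hypothetical tool definition; the schema follows the OpenAI tools format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# response = client.chat.completions.create(
#     model="my-gpt4-chat",
#     messages=[{"role": "user", "content": "Weather in Paris?"}],
#     tools=tools,
#     tool_choice="auto",
# )
```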

Will my tests break?

No, as long as you're mocking HTTP responses: the request/response format is identical. If your tests make real API calls, use a dedicated API key with a short expiration (e.g. 1 day).
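For example, a network-free test can stub the client with unittest.mock; the stubbed reply below is invented:

```python
from unittest.mock import MagicMock

# Stub the client so the test never touches the network.
client = MagicMock()
client.chat.completions.create.return_value = {
    "choices": [{"message": {"content": "stubbed reply"}}]
}

response = client.chat.completions.create(
    model="my-gpt4-chat",
    messages=[{"role": "user", "content": "Hi"}],
)
assert response["choices"][0]["message"]["content"] == "stubbed reply"
```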


Next steps