Migrating to ModelRiver from a direct provider API typically requires changing just two lines of code: your base URL and API key. This guide covers migration paths for the most common providers.
Migrating from OpenAI
Step 1: Create a workflow
- Open the ModelRiver console
- Navigate to Workflows → Create Workflow
- Name it (e.g., `my-gpt4-chat`)
- Select OpenAI as the provider and your preferred model
- Optionally add a fallback provider (e.g., Anthropic Claude)
- Save
Step 2: Update your code
Before (direct OpenAI):
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-YOUR_OPENAI_KEY"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
```

After (ModelRiver):
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",  # ← changed
    api_key="mr_live_YOUR_API_KEY"  # ← changed
)

response = client.chat.completions.create(
    model="my-gpt4-chat",  # ← workflow name
    messages=[{"role": "user", "content": "Hello"}]
)
```

Node.js
Before:
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk-YOUR_OPENAI_KEY",
});
```

After:
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.modelriver.com/v1", // ← changed
  apiKey: "mr_live_YOUR_API_KEY", // ← changed
});
```

That's it: no other code changes needed. All response formats remain identical.
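For example, with the migrated Python client, existing response handling works unchanged (a minimal sketch; `my-gpt4-chat` is the workflow from Step 1):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="my-gpt4-chat",  # workflow name, not a provider model ID
    messages=[{"role": "user", "content": "Hello"}],
)

# Responses keep the OpenAI shape, so existing parsing code is untouched.
print(response.choices[0].message.content)
print(response.usage.total_tokens)  # usage metadata follows the same format
```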
Migrating from Anthropic
Step 1: Create a workflow
Same as above, but select Anthropic as the provider and choose your Claude model.
Step 2: Update your code
Before (direct Anthropic):
```python
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-YOUR_KEY")

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)

print(response.content[0].text)
```

After (ModelRiver with OpenAI SDK):
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="my-claude-chat",  # workflow configured with Claude
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)

print(response.choices[0].message.content)
```

Note: When migrating from Anthropic, you'll switch to the OpenAI response format: `response.content[0].text` becomes `response.choices[0].message.content`.
Or use the native API for format-agnostic responses:
```javascript
const response = await fetch("https://api.modelriver.com/v1/ai", {
  method: "POST",
  headers: {
    Authorization: "Bearer mr_live_YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    workflow: "my-claude-chat",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello" }],
  }),
});

const data = await response.json();
```
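The same native call in Python would look like this (a sketch using the `requests` library; endpoint, headers, and payload mirror the fetch example above):

```python
import requests

# Native endpoint: addressed by workflow name rather than model ID.
response = requests.post(
    "https://api.modelriver.com/v1/ai",
    headers={
        "Authorization": "Bearer mr_live_YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "workflow": "my-claude-chat",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
data = response.json()
```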
Migrating from LangChain

Before
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_key="sk-YOUR_OPENAI_KEY",
    model="gpt-4o"
)
```

After
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base="https://api.modelriver.com/v1",  # ← changed
    openai_api_key="mr_live_YOUR_API_KEY",  # ← changed
    model="my-workflow"  # ← workflow name
)
```
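Downstream usage is unchanged. As a minimal sketch (assuming the `my-workflow` workflow above):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base="https://api.modelriver.com/v1",
    openai_api_key="mr_live_YOUR_API_KEY",
    model="my-workflow",
)

# Invocation is unchanged; only the constructor arguments differ.
result = llm.invoke("Hello")
print(result.content)  # AIMessage content, same as with a direct OpenAI key
```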
Migrating from LlamaIndex

Before
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    api_key="sk-YOUR_OPENAI_KEY",
    model="gpt-4o"
)
```

After
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    api_base="https://api.modelriver.com/v1",  # ← changed
    api_key="mr_live_YOUR_API_KEY",  # ← changed
    model="my-workflow"  # ← workflow name
)
```
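Calls then work as before; a minimal sketch using LlamaIndex's standard `complete` method:

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    api_base="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
    model="my-workflow",
)

# Completion calls are unchanged; responses expose .text as usual.
response = llm.complete("Hello")
print(response.text)
```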
Migrating from Vercel AI SDK

Before
```javascript
import { createOpenAI } from "@ai-sdk/openai";

const openai = createOpenAI({
  apiKey: "sk-YOUR_OPENAI_KEY",
});
```

After
```javascript
import { createOpenAI } from "@ai-sdk/openai";

const modelriver = createOpenAI({
  baseURL: "https://api.modelriver.com/v1", // ← changed
  apiKey: "mr_live_YOUR_API_KEY", // ← changed
});
```

What you get after migration
| Feature | Direct provider | ModelRiver |
|---|---|---|
| Automatic failover | ❌ Manual | ✅ Automatic |
| Cost tracking | ❌ Manual | ✅ Built-in |
| Request logging | ❌ Build your own | ✅ Automatic |
| Structured outputs | ❌ Per-request | ✅ Workflow-level |
| Switch providers | ❌ Code change | ✅ Console toggle |
| Rate limit handling | ❌ Manual | ✅ Automatic |
| Webhook integration | ❌ Build your own | ✅ Built-in |
Migration checklist
- Create a workflow in the ModelRiver console
- Generate a production API key
- Update `base_url`/`baseURL` in your code
- Update `api_key`/`apiKey` to your ModelRiver key
- Update `model` to your workflow name
- Test in development environment
- Add fallback providers for production resilience
- Configure structured outputs (optional)
- Set up webhooks for async workflows (optional)
- Deploy to production
- Monitor via Request Logs
FAQ
Do I need to change response handling?
No. When using the OpenAI compatibility layer, responses are identical to OpenAI's format. Your existing response parsing code works unchanged.
Can I still use streaming?
Yes. Streaming works identically: set `stream: true` and receive SSE events in the same format. See Streaming.
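A minimal streaming sketch with the OpenAI Python SDK pointed at ModelRiver (assuming the `my-gpt4-chat` workflow from above):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

# Streaming deltas arrive in the standard OpenAI chunk format.
stream = client.chat.completions.create(
    model="my-gpt4-chat",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    # Guard: some chunks (e.g., the final one) may carry no content delta.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```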
What about function calling?
Yes. The `tools` and `tool_choice` parameters work the same way. See Function calling.
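A minimal sketch (the `get_weather` tool here is purely illustrative):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

# Hypothetical tool, defined in the standard OpenAI tools schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="my-gpt4-chat",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# Tool calls come back in the usual OpenAI format.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```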
Will my tests break?
Not if you're mocking HTTP responses: the request/response format is identical. If you're making real API calls in tests, use a dedicated API key with a short expiration (e.g., 1 day).
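For instance, a test that stubs the SDK call never touches the network, so it is unaffected by the migration (a sketch using `unittest.mock`; names are illustrative):

```python
from unittest.mock import MagicMock, patch

from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_TEST_KEY",  # never used; the call below is stubbed
)

# Build a stand-in response with the familiar OpenAI shape.
fake = MagicMock()
fake.choices[0].message.content = "mocked reply"

with patch.object(client.chat.completions, "create", return_value=fake):
    response = client.chat.completions.create(
        model="my-gpt4-chat",  # illustrative workflow name
        messages=[{"role": "user", "content": "Hello"}],
    )

assert response.choices[0].message.content == "mocked reply"
```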
Next steps
- OpenAI compatibility: Full SDK compatibility details
- Endpoints: Complete endpoint reference
- Getting started: Set up your first project