Overview
LLM frameworks provide high-level abstractions for building AI-powered applications: chains, agents, RAG pipelines, and multi-step workflows. ModelRiver works with all major frameworks through its OpenAI-compatible endpoint.
Why use ModelRiver with an LLM framework?
- Automatic failover: If one provider goes down, your chains and pipelines keep running
- Cost tracking: See per-request token costs across all framework calls
- Provider flexibility: Switch between GPT-4, Claude, and Gemini without changing framework code
- Observability: Every framework request appears in your Request Logs
Supported frameworks
| Framework | Language | Use Case | Difficulty | Guide |
|---|---|---|---|---|
| LangChain | Python | Chains, agents, RAG, tool calling | ⭐ Easy | View guide → |
| LlamaIndex | Python | Document QA, knowledge bases, chat engines | ⭐ Easy | View guide → |
| Haystack | Python | Search pipelines, custom components | ⭐⭐ Medium | View guide → |
How it works
All three frameworks use OpenAI-compatible clients under the hood. To route through ModelRiver, you simply change three configuration values:
- Base URL → https://api.modelriver.com/v1
- API key → Your ModelRiver API key
- Model name → Your ModelRiver workflow slug
That's it: no framework plugins, no custom adapters, no code changes beyond configuration.
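For example, the same three values work with any OpenAI-compatible client. Here is a minimal sketch using the official openai Python SDK; the workflow slug my-workflow and the API key are placeholders:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",  # point the client at ModelRiver
    api_key="mr_live_YOUR_API_KEY",            # your ModelRiver API key
)

response = client.chat.completions.create(
    model="my-workflow",  # ModelRiver workflow slug instead of a provider model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```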
LangChain
Python's most popular LLM orchestration framework. Point LangChain's ChatOpenAI at ModelRiver for a drop-in replacement across chains, agents, RAG pipelines, and tool-calling workflows.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
    model="my-workflow",
)

response = llm.invoke("Explain quantum computing in simple terms")
```
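The configured llm drops straight into LangChain Expression Language chains. An illustrative sketch (the prompt and question are placeholders, not part of the ModelRiver setup):

```python
from langchain_core.prompts import ChatPromptTemplate

# Compose a prompt with the ModelRiver-backed llm from above.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise assistant."),
    ("user", "{question}"),
])
chain = prompt | llm

result = chain.invoke({"question": "What is retrieval-augmented generation?"})
print(result.content)
```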
LlamaIndex
Build document QA systems, knowledge bases, and chat engines with LlamaIndex. ModelRiver handles the LLM and embedding calls with automatic failover.
```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    api_base="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
    model="my-workflow",
)
```
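The llm above can be called directly or registered as the LlamaIndex default. A minimal sketch, assuming a recent llama-index release where Settings lives in llama_index.core:

```python
from llama_index.core import Settings

# Make the ModelRiver-backed llm the default for query engines and chat engines.
Settings.llm = llm

# Direct completion call; the prompt text is illustrative.
print(llm.complete("Summarize the benefits of provider failover.").text)
```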
Haystack
Build custom search and NLP pipelines with Haystack's component-based architecture. Route all LLM calls through ModelRiver for provider failover.
```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.utils import Secret  # needed for Secret.from_token below

generator = OpenAIChatGenerator(
    api_base_url="https://api.modelriver.com/v1",
    api_key=Secret.from_token("mr_live_YOUR_API_KEY"),
    model="my-workflow",
)
```
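The generator then runs like any other Haystack 2.x chat generator. A short sketch (the question is illustrative; the .text accessor assumes a recent Haystack 2.x release):

```python
from haystack.dataclasses import ChatMessage

# Send one user message through the ModelRiver-backed generator.
result = generator.run(messages=[ChatMessage.from_user("What is a search pipeline?")])
print(result["replies"][0].text)
```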
Next steps
- Agent Frameworks: Multi-agent systems with AutoGen, CrewAI, and more
- Backend Frameworks: Build AI-powered web applications
- API reference: Endpoint documentation