Agent Framework Integrations

Build autonomous and multi-agent AI systems. Route every agent's LLM calls through ModelRiver for automatic failover, cost tracking, and full observability.

Overview

Agent frameworks enable autonomous AI systems that can use tools, make decisions, collaborate with other agents, and complete multi-step tasks. ModelRiver provides the LLM backbone for all of them, with automatic failover so your agents never stall on a provider outage.

Why use ModelRiver with agent frameworks?

  • Agent resilience: Provider failover keeps agents running even during outages
  • Per-agent cost tracking: Assign a separate ModelRiver workflow to each agent to track its costs independently (see the sketch after this list)
  • Tool-call reliability: Function calling works across all providers
  • Full observability: See every agent step in your Request Logs
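
As a minimal illustration of per-agent cost tracking, the sketch below points two agents at two different ModelRiver workflows through the plain OpenAI SDK. The workflow names research-agent and writer-agent are placeholders for workflows you have configured in your dashboard.

PYTHON
from openai import OpenAI

# One client, two ModelRiver workflows: each agent's usage is tracked under its own workflow
client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

researcher_reply = client.chat.completions.create(
    model="research-agent",  # placeholder workflow for the research agent
    messages=[{"role": "user", "content": "Summarize the latest pricing data."}],
)
writer_reply = client.chat.completions.create(
    model="writer-agent",  # placeholder workflow for the writing agent
    messages=[{"role": "user", "content": "Draft a blog post from that summary."}],
)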

Supported frameworks

| Framework | Language | Use Case | Difficulty | Guide |
| --- | --- | --- | --- | --- |
| AutoGen | Python | Multi-agent conversations, group chat | ⭐⭐ Medium | View guide → |
| CrewAI | Python | Role-based agent crews, task delegation | ⭐⭐ Medium | View guide → |
| OpenAI Agents SDK | Python | Tools, handoffs, guardrails | ⭐⭐ Medium | View guide → |
| LangGraph | Python | Graph-based agents, state machines | ⭐⭐ Medium | View guide → |

AutoGen

Microsoft's framework for multi-agent conversations. Agents can chat with each other, use tools, and collaborate on complex tasks.

PYTHON
from autogen import OpenAIWrapper

# Point AutoGen's OpenAI-compatible config at ModelRiver; pass this list to agents via llm_config
config_list = [{
    "base_url": "https://api.modelriver.com/v1",
    "api_key": "mr_live_YOUR_API_KEY",
    "model": "my-workflow",  # the ModelRiver workflow to route through
}]
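
To turn the config into a working agent pair, pass it to AutoGen's agents via llm_config. A minimal sketch assuming the classic pyautogen-style API (AssistantAgent / UserProxyAgent); the agent names and the task message are illustrative.

PYTHON
from autogen import AssistantAgent, UserProxyAgent

# Both agents route their completions through the ModelRiver workflow configured above
assistant = AssistantAgent(name="assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(name="user", human_input_mode="NEVER", code_execution_config=False)

user_proxy.initiate_chat(assistant, message="Plan a three-step launch checklist.")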

Full AutoGen guide →


CrewAI

Build role-based agent crews where each agent has a specific role, goal, and backstory. Assign different ModelRiver workflows per agent for cost separation.

PYTHON
from crewai import Agent, LLM

# Route this agent's LLM calls through ModelRiver's OpenAI-compatible endpoint
llm = LLM(
    model="openai/my-workflow",  # "openai/" prefix selects the OpenAI-compatible provider
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)
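
From there, each agent can get its own LLM, and therefore its own workflow, so costs are tracked separately per agent. A minimal sketch; the roles, the task, and the second workflow name writer-workflow are illustrative.

PYTHON
from crewai import Agent, Task, Crew, LLM

# A second workflow so the writer's costs are tracked separately from the researcher's
writer_llm = LLM(
    model="openai/writer-workflow",
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

researcher = Agent(role="Researcher", goal="Gather facts", backstory="A thorough analyst", llm=llm)
writer = Agent(role="Writer", goal="Write a summary", backstory="A clear technical writer", llm=writer_llm)

task = Task(description="Summarize recent LLM gateway trends.", expected_output="A short summary", agent=writer)
crew = Crew(agents=[researcher, writer], tasks=[task])
result = crew.kickoff()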

Full CrewAI guide →


OpenAI Agents SDK

OpenAI's official SDK for building agents with tools, handoffs, and guardrails. Point it at ModelRiver to add failover and multi-provider support.

PYTHON
from agents import Agent, Runner, set_default_openai_client
from openai import AsyncOpenAI

# Make ModelRiver the default OpenAI client for every agent in the process
client = AsyncOpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)
set_default_openai_client(client)
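
With the default client pointed at ModelRiver, agents are defined as usual and every run is routed through your workflow. A minimal sketch; the agent name, instructions, and prompt are illustrative, and the switch to the Chat Completions API is an assumption about what the gateway exposes.

PYTHON
from agents import Agent, Runner, set_default_openai_api

# If the gateway speaks the Chat Completions API rather than the Responses API, tell the SDK so
set_default_openai_api("chat_completions")

agent = Agent(
    name="Planner",
    instructions="Break the user's request into concrete steps.",
    model="my-workflow",  # the ModelRiver workflow name
)

result = Runner.run_sync(agent, "Outline a migration plan to ModelRiver.")
print(result.final_output)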

Full OpenAI Agents SDK guide →


LangGraph

Build graph-based agent workflows with branching, cycles, and persistent state. Combine with ModelRiver for resilient agent graphs.

PYTHON
from langchain_openai import ChatOpenAI

# Every LangGraph node that calls this model goes through ModelRiver
llm = ChatOpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
    model="my-workflow",
)
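
The same model drops into any LangGraph graph, for example LangGraph's prebuilt ReAct agent. A minimal sketch; the tool and the prompt are illustrative.

PYTHON
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

# Build a ReAct-style agent graph whose LLM calls go through ModelRiver
agent = create_react_agent(llm, tools=[word_count])

result = agent.invoke({"messages": [{"role": "user", "content": "How many words are in 'resilient agent graphs'?"}]})
print(result["messages"][-1].content)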

Full LangGraph guide →


Next steps