
CrewAI + ModelRiver

Multi-agent orchestration with per-agent model routing, automatic failover, and structured output enforcement across your entire crew.

Overview

CrewAI is a multi-agent orchestration framework where each "crew member" is an AI agent with a specific role, goal, and set of tools. Point CrewAI at ModelRiver and each agent can use a different model, follow its own fallback chain, and produce structured outputs.

What you get:

  • Per-agent model routing through ModelRiver workflows
  • Automatic failover so no single-provider outage stops your crew
  • Cost tracking per agent and per task
  • Structured output schemas enforced at the workflow level

Quick start

Install dependencies

Bash
pip install crewai crewai-tools openai

Connect CrewAI to ModelRiver

CrewAI uses LiteLLM under the hood, which supports OpenAI-compatible endpoints:

Python
import os

os.environ["OPENAI_API_BASE"] = "https://api.modelriver.com/v1"
os.environ["OPENAI_API_KEY"] = "mr_live_YOUR_API_KEY"

Or configure per-agent:

Python
from crewai import LLM

llm = LLM(
    model="openai/my-chat-workflow",
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

Basic crew

Python
from crewai import Agent, Task, Crew, LLM

# ModelRiver LLM
llm = LLM(
    model="openai/my-chat-workflow",
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

# Define agents
researcher = Agent(
    role="Senior Researcher",
    goal="Find and synthesise the most relevant information",
    backstory="You are an expert research analyst with 20 years of experience.",
    llm=llm,
    verbose=True,
)

writer = Agent(
    role="Content Writer",
    goal="Create compelling, well-structured content",
    backstory="You are a seasoned tech writer who explains complex topics simply.",
    llm=llm,
    verbose=True,
)

# Define tasks
research_task = Task(
    description="Research the latest trends in AI gateway platforms. Focus on routing, failover, and observability features.",
    expected_output="A detailed research brief with key findings and sources.",
    agent=researcher,
)

write_task = Task(
    description="Write a blog post based on the research brief. Make it engaging and informative.",
    expected_output="A 500-word blog post in markdown format.",
    agent=writer,
)

# Assemble and run the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    verbose=True,
)

result = crew.kickoff()
print(result)

Per-agent workflows

Give each agent the optimal model for its role:

Python
# Fast model for research (lots of calls, needs speed)
research_llm = LLM(
    model="openai/fast-researcher",  # GPT-4o-mini workflow
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

# Powerful model for writing (quality matters)
writer_llm = LLM(
    model="openai/deep-writer",  # Claude 3.5 Sonnet workflow
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

# Analytical model for review (reasoning matters)
reviewer_llm = LLM(
    model="openai/strict-reviewer",  # GPT-4o workflow
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

researcher = Agent(role="Researcher", goal="...", backstory="...", llm=research_llm)
writer = Agent(role="Writer", goal="...", backstory="...", llm=writer_llm)
reviewer = Agent(role="Reviewer", goal="...", backstory="...", llm=reviewer_llm)

Change any agent's model in the ModelRiver console; no code changes are needed.
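
To see the routing in action, the three role-specific agents can be chained in a single sequential crew. This is a minimal sketch: the task descriptions are illustrative placeholders, and the agents are assumed to have real goals and backstories filled in.

Python
from crewai import Crew, Process, Task

# Illustrative tasks for the role-specific agents defined above
research = Task(
    description="Gather recent developments in AI gateway routing and failover.",
    expected_output="A bullet-point research brief with sources.",
    agent=researcher,
)
draft = Task(
    description="Turn the research brief into a 500-word article.",
    expected_output="A markdown article draft.",
    agent=writer,
)
review = Task(
    description="Review the draft for accuracy and clarity and return an edited version.",
    expected_output="The final edited article.",
    agent=reviewer,
)

crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research, draft, review],
    process=Process.sequential,  # tasks run in order, one agent after another
    verbose=True,
)
print(crew.kickoff())

Each call still hits the agent's own workflow, so the researcher, writer, and reviewer can be rerouted independently in the console.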


Agents with tools

Python
from crewai_tools import SerperDevTool, WebsiteSearchTool

search_tool = SerperDevTool()
web_tool = WebsiteSearchTool()

researcher = Agent(
    role="Research Analyst",
    goal="Find comprehensive information on the topic",
    backstory="Expert at finding and synthesising information from multiple sources.",
    tools=[search_tool, web_tool],
    llm=llm,
    verbose=True,
)
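
Only the agent's reasoning and generation calls go through the ModelRiver workflow; the tools call their own APIs (SerperDevTool, for example, expects a Serper API key in the environment). A small usage sketch with an illustrative task:

Python
from crewai import Crew, Task

# Illustrative task for the tool-equipped researcher
comparison_task = Task(
    description="Compare three AI gateway platforms on routing, failover, and observability.",
    expected_output="A short comparison with a source for each claim.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[comparison_task], verbose=True)
print(crew.kickoff())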

Structured output with CrewAI

Use Pydantic models for structured task outputs:

Python
from pydantic import BaseModel, Field

class BlogPost(BaseModel):
    title: str = Field(description="Blog post title")
    summary: str = Field(description="One paragraph summary")
    sections: list[str] = Field(description="List of section headings")
    word_count: int = Field(description="Approximate word count")

write_task = Task(
    description="Write a blog post about AI routing platforms.",
    expected_output="Structured blog post data",
    agent=writer,
    output_pydantic=BlogPost,
)

result = crew.kickoff()
# result.pydantic contains a BlogPost instance
print(result.pydantic.title)
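
Because result.pydantic is a validated Pydantic model, it can be handed straight to downstream code or serialised. A short follow-up sketch (model_dump_json assumes the Pydantic v2 API that current CrewAI releases use):

Python
# Serialise the validated output for storage or an API response
post = result.pydantic
print(post.model_dump_json(indent=2))
print(post.sections)  # plain Python list of section headings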

Best practices

  1. One workflow per agent role: Researchers need speed, writers need quality, reviewers need reasoning
  2. Configure fallbacks per workflow: If Claude is down, the writer falls back automatically
  3. Monitor per-agent costs: Track which agents consume the most tokens in Request Logs (see the sketch after this list)
  4. Use structured outputs: Define output schemas in ModelRiver for consistent data formats
  5. Start with verbose mode: Debug agent interactions before turning off verbose logging
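
For point 3, ModelRiver's Request Logs give the authoritative per-agent and per-task breakdown. As a rough in-process complement, recent CrewAI releases also expose aggregate usage on the crew and per-task outputs on the result, though exact attribute names vary between versions:

Python
# Rough in-process token accounting after a run (attribute names are
# version-dependent); detailed per-agent costs live in ModelRiver's Request Logs.
result = crew.kickoff()

print(crew.usage_metrics)  # aggregate prompt/completion token counts for the run

for task_output in result.tasks_output:
    print(task_output.agent, "->", task_output.raw[:80])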

Next steps