Overview
The OpenAI Agents SDK provides a lightweight, production-ready framework for building autonomous agents with tool use, handoffs between agents, and input/output guardrails. Point it at ModelRiver and your agents can use any provider, not just OpenAI, with automatic failover.
What you get:
- Run the Agents SDK against any provider (OpenAI, Anthropic, Mistral, etc.)
- Automatic failover during multi-step agent loops
- Per-step cost tracking in Request Logs
- Provider switching from the console without code changes
Quick start
Install dependencies
```bash
pip install openai-agents
```

Connect to ModelRiver
```python
from agents import (
    Agent,
    Runner,
    set_default_openai_api,
    set_default_openai_client,
    set_tracing_disabled,
)
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

# Route all Agents SDK traffic through ModelRiver's OpenAI-compatible endpoint
set_default_openai_client(client)
set_default_openai_api("chat_completions")
set_tracing_disabled(True)  # the SDK's default trace exporter expects an OpenAI key

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant that answers questions clearly.",
    model="my-chat-workflow",
)

result = Runner.run_sync(agent, "What is ModelRiver?")
print(result.final_output)
```
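set_default_openai_client registers the ModelRiver client for every agent in the process, which suits most apps. If only some agents should route through ModelRiver, the SDK also lets a model object carry its own client; a minimal sketch reusing the client and workflow name from above:

```python
from agents import Agent, OpenAIChatCompletionsModel

# Only this agent uses the ModelRiver client; others keep the process default
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant that answers questions clearly.",
    model=OpenAIChatCompletionsModel(model="my-chat-workflow", openai_client=client),
)
```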
Agent with tools

```python
from agents import Agent, Runner, function_tool, set_default_openai_client
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)
set_default_openai_client(client)

@function_tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"22°C and sunny in {location}"

@function_tool
def search_knowledge_base(query: str) -> str:
    """Search the internal knowledge base."""
    return "Found: ModelRiver routes AI requests across providers."

agent = Agent(
    name="Research Assistant",
    instructions="Help users find information. Use tools when needed.",
    model="my-chat-workflow",
    tools=[get_weather, search_knowledge_base],
)

result = Runner.run_sync(agent, "What's the weather in Tokyo?")
print(result.final_output)
```

Agent handoffs
Build multi-agent systems where agents delegate to specialists:
```python
from agents import Agent, Runner, set_default_openai_client
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)
set_default_openai_client(client)

# Specialist agents
billing_agent = Agent(
    name="Billing Specialist",
    instructions="You handle billing questions. Be precise about pricing.",
    model="my-chat-workflow",
)

tech_agent = Agent(
    name="Tech Support",
    instructions="You handle technical questions. Provide code examples.",
    model="my-chat-workflow",
)

# Triage agent that delegates
triage_agent = Agent(
    name="Triage",
    instructions="Route the user to the right specialist based on their question.",
    model="my-chat-workflow",
    handoffs=[billing_agent, tech_agent],
)

result = Runner.run_sync(triage_agent, "How much does the Pro plan cost?")
print(result.final_output)
```
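A handoff chain runs inside a single Runner call, so one request can span several agents. To see which specialist actually produced the final answer (useful when reconciling per-step costs in Request Logs), inspect the run result; a small follow-up to the run above:

```python
# Which agent finished the run? Handy when auditing handoff chains.
print(result.last_agent.name)  # e.g. "Billing Specialist"
```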
Guardrails

Add input validation (output guardrails follow the same pattern):
```python
from agents import Agent, GuardrailFunctionOutput, InputGuardrail, Runner
from pydantic import BaseModel

# Assumes the ModelRiver client from Quick start is registered
# via set_default_openai_client.

class SafetyCheck(BaseModel):
    is_safe: bool
    reason: str

async def check_input(ctx, agent, input_text):
    # Use a fast workflow for guardrail checks
    result = await Runner.run(
        Agent(
            name="Safety Checker",
            instructions="Check if the input is safe and appropriate. Return is_safe=false for harmful content.",
            model="fast-safety-check",  # Cheap, fast workflow
            output_type=SafetyCheck,
        ),
        input_text,
        context=ctx.context,
    )
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=not result.final_output.is_safe,
    )

agent = Agent(
    name="Guarded Assistant",
    instructions="You are a helpful assistant.",
    model="my-chat-workflow",
    input_guardrails=[InputGuardrail(guardrail_function=check_input)],
)
```
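When the tripwire fires, the run is aborted with an exception rather than returning a result, so callers should catch it; a minimal sketch using the SDK's exception type:

```python
from agents.exceptions import InputGuardrailTripwireTriggered

try:
    result = Runner.run_sync(agent, "Some user input")
    print(result.final_output)
except InputGuardrailTripwireTriggered:
    print("Request blocked by the input guardrail.")
```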
Streaming

```python
import asyncio

from agents import Agent, Runner, set_default_openai_client
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)
set_default_openai_client(client)

agent = Agent(
    name="Storyteller",
    instructions="Tell creative, engaging stories.",
    model="my-chat-workflow",
)

async def main():
    result = Runner.run_streamed(agent, "Tell me a story about AI")
    async for event in result.stream_events():
        if event.type == "raw_response_event" and hasattr(event.data, "delta"):
            print(event.data.delta, end="", flush=True)

asyncio.run(main())
```

Per-agent model routing
Use different ModelRiver workflows for different agent roles:
```python
# Fast model for triage (speed matters)
triage_agent = Agent(
    name="Triage",
    instructions="Route the user to the right specialist based on their question.",
    model="fast-triage",
)

# Powerful model for complex answers
expert_agent = Agent(
    name="Expert",
    instructions="Give thorough, well-reasoned answers.",
    model="deep-expert",
)

# Cheap model for guardrails
safety_agent = Agent(
    name="Safety",
    instructions="Check if the input is safe and appropriate.",
    model="fast-safety-check",
)
```

Best practices
- Use different workflows per agent: Triage agents need speed, experts need depth
- Configure guardrails with cheap models: Safety checks should use fast, inexpensive workflows
- Monitor handoff chains: Check Request Logs for multi-step costs
- Set max turns: Cap runs with max_turns so a runaway agent loop can't generate excessive costs
- Use structured outputs: Define output schemas for agents that produce structured data (both practices are sketched below)
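A minimal sketch of the last two practices together, assuming the ModelRiver client from Quick start is registered and reusing the my-chat-workflow placeholder:

```python
from agents import Agent, Runner
from agents.exceptions import MaxTurnsExceeded
from pydantic import BaseModel

class Summary(BaseModel):
    title: str
    key_points: list[str]

agent = Agent(
    name="Summarizer",
    instructions="Summarize the input as a title plus key points.",
    model="my-chat-workflow",
    output_type=Summary,  # the SDK parses the final output into this schema
)

try:
    # Cap the loop so a tool-calling spiral can't run up costs
    result = Runner.run_sync(agent, "Summarize what ModelRiver does.", max_turns=3)
    print(result.final_output.title)
except MaxTurnsExceeded:
    print("Run stopped after hitting the turn limit.")
```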
Next steps
- LangGraph integration: Graph-based agent workflows
- CrewAI integration: Multi-agent orchestration
- API reference: Endpoint documentation