ModelRiver supports OpenAI-compatible function calling via the `tools` and `tool_choice` parameters. Tool definitions are passed through to the underlying provider (OpenAI, xAI, Mistral), and tool calls are returned in the standard OpenAI format.
How function calling works
- Define tools: Describe the functions your application can execute
- Send the request: Include tool definitions alongside your messages
- Receive tool_calls: The model returns structured function calls instead of text
- Execute functions: Your application runs the requested functions
- Return results: Send function results back to continue the conversation
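Taken together, these steps form a loop that runs until the model answers in text rather than requesting tools. A minimal sketch of that loop (the `run_tool_loop` helper and `execute_tool` callback are illustrative names, not part of the ModelRiver API; `client` is any OpenAI-compatible client):

```python
import json

def run_tool_loop(client, model, messages, tools, execute_tool, max_rounds=5):
    """Drive a conversation until the model stops requesting tool calls."""
    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # final text answer
        # Record the assistant turn, then answer each requested call.
        messages.append(message)
        for tool_call in message.tool_calls:
            args = json.loads(tool_call.function.arguments)
            result = execute_tool(tool_call.function.name, args)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("model kept requesting tools; giving up")
```

`max_rounds` caps the loop so a model that keeps requesting tools cannot spin forever.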
Basic example
Python
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="my_workflow",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }],
    tool_choice="auto"
)

# Check if the model wants to call a function
message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
else:
    print(message.content)
```
Node.js
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: "mr_live_YOUR_API_KEY",
});

const response = await client.chat.completions.create({
  model: "my_workflow",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [{
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  }],
  tool_choice: "auto",
});

const message = response.choices[0].message;
if (message.tool_calls) {
  for (const toolCall of message.tool_calls) {
    console.log(`Function: ${toolCall.function.name}`);
    console.log(`Arguments: ${toolCall.function.arguments}`);
  }
} else {
  console.log(message.content);
}
```
Response with tool_calls
When the model decides to call a function, the response includes a `tool_calls` array:
```json
{
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\"location\": \"Paris\", \"unit\": \"celsius\"}"
        }
      }]
    },
    "finish_reason": "tool_calls"
  }]
}
```
Response fields
| Field | Description |
|---|---|
| `tool_calls[].id` | Unique identifier for this tool call |
| `tool_calls[].type` | Always `"function"` |
| `tool_calls[].function.name` | Name of the function to execute |
| `tool_calls[].function.arguments` | JSON string of function arguments |
| `finish_reason` | Set to `"tool_calls"` when the model wants to call functions |
| `content` | Typically `null` when tool calls are present |
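Note that `function.arguments` arrives as a JSON string, not a parsed object, and models occasionally emit malformed JSON. A small sketch of defensive parsing (the `parse_tool_call` helper is a hypothetical name; the `tool_call` dict mirrors the response shape above):

```python
import json

def parse_tool_call(tool_call):
    """Decode a tool call's JSON-string arguments, tolerating malformed output."""
    try:
        args = json.loads(tool_call["function"]["arguments"])
    except json.JSONDecodeError:
        return None  # signal the caller to retry or fall back
    return tool_call["function"]["name"], args

# A tool call in the shape shown above
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": "{\"location\": \"Paris\", \"unit\": \"celsius\"}",
    },
}
print(parse_tool_call(tool_call))
# ('get_weather', {'location': 'Paris', 'unit': 'celsius'})
```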
Multi-turn tool conversations
After executing the requested function, send the result back to continue the conversation:
```python
import json

# Step 1: Initial request with tools
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

response = client.chat.completions.create(
    model="my_workflow",
    messages=messages,
    tools=tools
)

message = response.choices[0].message

# Step 2: Execute the function
if message.tool_calls:
    # Add the assistant's message (with tool_calls)
    messages.append(message)

    for tool_call in message.tool_calls:
        # Execute your function
        if tool_call.function.name == "get_weather":
            args = json.loads(tool_call.function.arguments)
            result = get_weather(args["location"])  # Your function

            # Add the function result
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })

    # Step 3: Get the final response
    final_response = client.chat.completions.create(
        model="my_workflow",
        messages=messages,
        tools=tools
    )

    print(final_response.choices[0].message.content)
    # Output: "The weather in Paris is currently 18°C and partly cloudy."
```
Tool choice options
Control how the model uses tools with the `tool_choice` parameter:
| Value | Behaviour |
|---|---|
| `"auto"` | Model decides whether to call a function (default) |
| `"none"` | Model will not call any functions |
| `"required"` | Model must call at least one function |
| `{"type": "function", "function": {"name": "get_weather"}}` | Force a specific function |
```python
# Force the model to call a specific function
response = client.chat.completions.create(
    model="my_workflow",
    messages=messages,
    tools=tools,
    tool_choice={
        "type": "function",
        "function": {"name": "get_weather"}
    }
)
```
Multiple tools
Define multiple tools and let the model choose which to call:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_restaurants",
            "description": "Search for restaurants near a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "cuisine": {"type": "string"},
                    "max_results": {"type": "integer", "default": 5}
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "book_reservation",
            "description": "Book a table at a restaurant",
            "parameters": {
                "type": "object",
                "properties": {
                    "restaurant_id": {"type": "string"},
                    "date": {"type": "string", "format": "date"},
                    "party_size": {"type": "integer"},
                    "time": {"type": "string"}
                },
                "required": ["restaurant_id", "date", "party_size"]
            }
        }
    }
]
```

The model may call multiple tools in a single response. Handle each tool call individually and return all results.
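One common way to handle several calls in one response is a dispatch table keyed by function name. A sketch, assuming hypothetical local implementations of two of the tools above:

```python
import json

# Hypothetical local implementations of the tools defined above
def get_weather(location):
    return {"location": location, "temp_c": 18}

def search_restaurants(location, cuisine=None, max_results=5):
    return {"results": []}

DISPATCH = {
    "get_weather": get_weather,
    "search_restaurants": search_restaurants,
}

def run_tool_calls(tool_calls):
    """Execute every requested call and build the matching tool messages."""
    tool_messages = []
    for tool_call in tool_calls:
        fn = DISPATCH[tool_call["function"]["name"]]
        args = json.loads(tool_call["function"]["arguments"])
        result = fn(**args)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": tool_call["id"],
            "content": json.dumps(result),
        })
    return tool_messages
```

Every `tool_call_id` the model emits must get a matching `role: "tool"` message back, or the follow-up request will be rejected.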
Provider support
Function calling support varies by provider and model:
| Provider | Supported models | Notes |
|---|---|---|
| OpenAI | GPT-4, GPT-4o, GPT-3.5-turbo | Full support |
| xAI | Grok | Full support |
| Mistral | Mistral Large, Mixtral | Full support |
| Anthropic | Claude 3+ | Uses different format; not yet supported via this adapter |
| Google | Gemini | Uses different format; not yet supported via this adapter |
Note: If your workflow's underlying provider doesn't support function calling, the request will return an error. Configure a fallback provider that supports tool use.
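A fallback can be as simple as trying workflows in order until one accepts the request. A provider-agnostic sketch (the `call_with_fallback` helper and `send_request` callable are illustrative names, not a ModelRiver feature):

```python
def call_with_fallback(send_request, workflows, **request_kwargs):
    """Try each workflow in order; return the first successful response."""
    last_error = None
    for workflow in workflows:
        try:
            return send_request(model=workflow, **request_kwargs)
        except Exception as err:  # e.g. provider rejects the tools parameter
            last_error = err
    raise RuntimeError(f"all workflows failed: {last_error}")
```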
Best practices
- Write clear descriptions: Function descriptions directly affect the model's decision to call them
- Use `required` fields: Specify which parameters are required to avoid incomplete calls
- Validate arguments: Always validate the parsed arguments before executing functions
- Handle missing calls: The model may respond with text instead of tool calls even when tools are available
- Limit tool count: Too many tools can confuse the model; keep the list focused
- Use `enum` for constrained values: Reduces errors in function arguments
- Set timeouts for function execution: Don't let tool execution block indefinitely
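The timeout advice above can be implemented with a worker thread from the standard library; a sketch (the `run_with_timeout` helper is an illustrative name):

```python
import concurrent.futures

def run_with_timeout(fn, args, timeout_s=10.0):
    """Run a tool function in a worker thread; give up after timeout_s seconds."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, **args)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # The worker thread keeps running; we just stop waiting for it.
        return {"error": f"tool call timed out after {timeout_s}s"}
    finally:
        pool.shutdown(wait=False)
```

Returning an error payload instead of raising lets you send the failure back as a tool message, so the model can explain the problem to the user.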
Next steps
- Streaming: Stream function call responses
- Request types: Different AI operation types
- Error handling: Handle function calling errors