
Function calling & tool use

Pass tool definitions to your AI workflows and receive structured function calls. ModelRiver supports the OpenAI-compatible tools format across multiple providers.

ModelRiver supports OpenAI-compatible function calling via tools and tool_choice. Tools are passed through to the underlying provider (OpenAI, xAI, Mistral), and tool calls are returned in standard OpenAI format.

How function calling works

  1. Define tools: Describe the functions your application can execute
  2. Send the request: Include tool definitions alongside your messages
  3. Receive tool_calls: The model returns structured function calls instead of text
  4. Execute functions: Your application runs the requested functions
  5. Return results: Send function results back to continue the conversation

Basic example

Python

PYTHON
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="my_workflow",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }],
    tool_choice="auto"
)

# Check if the model wants to call a function
message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
else:
    print(message.content)

Node.js

JAVASCRIPT
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: "mr_live_YOUR_API_KEY",
});

const response = await client.chat.completions.create({
  model: "my_workflow",
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [{
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string", description: "City name" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["location"],
      },
    },
  }],
  tool_choice: "auto",
});

const message = response.choices[0].message;
if (message.tool_calls) {
  for (const toolCall of message.tool_calls) {
    console.log(`Function: ${toolCall.function.name}`);
    console.log(`Arguments: ${toolCall.function.arguments}`);
  }
} else {
  console.log(message.content);
}

Response with tool_calls

When the model decides to call a function, the response includes a tool_calls array:

JSON
{
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\"location\": \"Paris\", \"unit\": \"celsius\"}"
        }
      }]
    },
    "finish_reason": "tool_calls"
  }]
}

Response fields

Field                            Description
tool_calls[].id                  Unique identifier for this tool call
tool_calls[].type                Always "function"
tool_calls[].function.name       Name of the function to execute
tool_calls[].function.arguments  JSON string of function arguments
finish_reason                    Set to "tool_calls" when the model wants to call functions
content                          Typically null when tool_calls are present
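
Because function.arguments is a JSON string rather than a parsed object, decode and validate it before executing anything. A minimal sketch (parse_arguments and its required default are illustrative helpers, not part of the ModelRiver API):

```python
import json

def parse_arguments(raw, required=("location",)):
    """Decode a tool call's arguments JSON string and confirm required keys.

    Models occasionally emit malformed JSON or omit parameters, so never
    trust the string blindly (and never pass it to eval()).
    """
    try:
        args = json.loads(raw)
    except json.JSONDecodeError:
        return None, "arguments were not valid JSON"
    missing = [k for k in required if k not in args]
    if missing:
        return None, f"missing required arguments: {missing}"
    return args, None
```

On failure, return the error string to the model in a tool message instead of crashing; models can often correct a malformed call on the next turn.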

Multi-turn tool conversations

After executing the requested function, send the result back to continue the conversation:

PYTHON
import json

# Step 1: Initial request with tools
# (`tools` is the list defined in the basic example above)
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

response = client.chat.completions.create(
    model="my_workflow",
    messages=messages,
    tools=tools
)

message = response.choices[0].message

# Step 2: Execute the function
if message.tool_calls:
    # Add the assistant's message (with tool_calls)
    messages.append(message)

    for tool_call in message.tool_calls:
        # Execute your function
        if tool_call.function.name == "get_weather":
            args = json.loads(tool_call.function.arguments)
            result = get_weather(args["location"])  # Your function

            # Add the function result
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(result)
            })

    # Step 3: Get the final response
    final_response = client.chat.completions.create(
        model="my_workflow",
        messages=messages,
        tools=tools
    )

    print(final_response.choices[0].message.content)
    # Output: "The weather in Paris is currently 18°C and partly cloudy."
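
When the model may request tools over several rounds, the exchange above generalizes to a loop that keeps calling the model until it answers in plain text. A sketch under the assumption that tool implementations live in a local registry dict; run_tool_loop and the injected create_fn are hypothetical helpers (create_fn stands in for client.chat.completions.create so the loop is easy to test), not part of the ModelRiver API:

```python
import json

def run_tool_loop(create_fn, messages, tools, registry, max_rounds=5):
    """Call the model repeatedly until it stops requesting tools.

    create_fn -- callable with the same signature as
                 client.chat.completions.create (injected for testability)
    registry  -- maps tool names to local Python callables
    """
    for _ in range(max_rounds):
        response = create_fn(model="my_workflow", messages=messages, tools=tools)
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # plain-text answer: we're done
        messages.append(message)  # keep the assistant's tool_calls in history
        for tool_call in message.tool_calls:
            func = registry[tool_call.function.name]
            args = json.loads(tool_call.function.arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(func(**args)),
            })
    raise RuntimeError("model kept requesting tools after max_rounds")
```

Capping the number of rounds guards against a model that keeps requesting tools indefinitely.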

Tool choice options

Control how the model uses tools with the tool_choice parameter:

Value                                                        Behaviour
"auto"                                                       Model decides whether to call a function (default)
"none"                                                       Model will not call any functions
"required"                                                   Model must call at least one function
{"type": "function", "function": {"name": "get_weather"}}    Force a specific function

PYTHON
# Force the model to call a specific function
response = client.chat.completions.create(
    model="my_workflow",
    messages=messages,
    tools=tools,
    tool_choice={
        "type": "function",
        "function": {"name": "get_weather"}
    }
)

Multiple tools

Define multiple tools and let the model choose which to call:

PYTHON
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_restaurants",
            "description": "Search for restaurants near a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "cuisine": {"type": "string"},
                    "max_results": {"type": "integer", "default": 5}
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "book_reservation",
            "description": "Book a table at a restaurant",
            "parameters": {
                "type": "object",
                "properties": {
                    "restaurant_id": {"type": "string"},
                    "date": {"type": "string", "format": "date"},
                    "party_size": {"type": "integer"},
                    "time": {"type": "string"}
                },
                "required": ["restaurant_id", "date", "party_size"]
            }
        }
    }
]

The model may call multiple tools in a single response. Handle each tool call individually and return all results.
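
One way to handle an arbitrary mix of tool calls is a dispatch table mapping tool names to local functions. A sketch (TOOL_REGISTRY, handle_tool_calls, and the stub implementations returning canned data are all illustrative, not part of the ModelRiver API):

```python
import json

# Map tool names to local implementations (illustrative stubs).
TOOL_REGISTRY = {
    "get_weather": lambda location, unit="celsius": {"temp_c": 18, "location": location},
    "search_restaurants": lambda location, cuisine=None, max_results=5: {"results": []},
}

def handle_tool_calls(tool_calls, messages):
    """Execute every requested tool and append one 'tool' message per call.

    Each result must carry the matching tool_call_id so the model can
    pair results with requests.
    """
    for tool_call in tool_calls:
        func = TOOL_REGISTRY.get(tool_call.function.name)
        if func is None:
            # Surface unknown tools as an error payload instead of crashing
            result = {"error": f"unknown tool: {tool_call.function.name}"}
        else:
            result = func(**json.loads(tool_call.function.arguments))
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result),
        })
```

Returning every result, including errors, keeps the conversation history consistent: each tool_call id in the assistant message gets exactly one matching tool message.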


Provider support

Function calling support varies by provider and model:

Provider     Supported models                 Notes
OpenAI       GPT-4, GPT-4o, GPT-3.5-turbo     Full support
xAI          Grok                             Full support
Mistral      Mistral Large, Mixtral           Full support
Anthropic    Claude 3+                        Uses a different format; not yet supported via this adapter
Google       Gemini                           Uses a different format; not yet supported via this adapter

Note: If your workflow's underlying provider doesn't support function calling, the request will return an error. Configure a fallback provider that supports tool use.


Best practices

  1. Write clear descriptions: Function descriptions directly affect the model's decision to call them
  2. Use required fields: Specify which parameters are required to avoid incomplete calls
  3. Validate arguments: Always validate the parsed arguments before executing functions
  4. Handle missing calls: The model may respond with text instead of tool calls even when tools are available
  5. Limit tool count: Too many tools can confuse the model; keep the list focused
  6. Use enum for constrained values: Reduces errors in function arguments
  7. Set timeouts for function execution: Don't let tool execution block indefinitely
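
For the last point, one option is to run each tool in a worker thread and give up after a deadline (call_with_timeout is an illustrative helper; note that the abandoned worker thread keeps running in the background, so tools should still be safe to leave unfinished):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

def call_with_timeout(func, args, timeout_s=10.0):
    """Run a tool function in a worker thread, returning an error payload
    the model can reason about instead of blocking the conversation."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(func, **args)
        try:
            return future.result(timeout=timeout_s)
        except FuturesTimeout:
            return {"error": f"tool timed out after {timeout_s}s"}
    finally:
        # Don't wait for a hung worker (requires Python 3.9+)
        pool.shutdown(wait=False, cancel_futures=True)
```

The error payload goes back to the model as an ordinary tool result, letting it apologize or retry rather than leaving the conversation hanging.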

Next steps