Overview
Datadog is an enterprise observability platform. By forwarding ModelRiver request data to Datadog, you can build dashboards that correlate AI metrics (latency, tokens, provider usage) with your application's existing telemetry.
Approach
ModelRiver provides its own Observability dashboard with Request Logs. To integrate with Datadog, you instrument your application code to emit custom metrics and traces around ModelRiver API calls.
Custom metrics via DogStatsD
Python
```python
from datadog import statsd
from openai import OpenAI
import time

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

def chat_with_metrics(workflow: str, messages: list) -> str:
    start = time.time()
    tags = [f"workflow:{workflow}"]

    try:
        response = client.chat.completions.create(
            model=workflow,
            messages=messages,
        )

        duration = (time.time() - start) * 1000
        tokens = response.usage.total_tokens

        # Emit latency, request count, and token usage metrics
        statsd.histogram("modelriver.request.duration_ms", duration, tags=tags)
        statsd.increment("modelriver.request.count", tags=tags + ["status:success"])
        statsd.histogram("modelriver.request.tokens", tokens, tags=tags)

        return response.choices[0].message.content

    except Exception:
        duration = (time.time() - start) * 1000
        statsd.increment("modelriver.request.count", tags=tags + ["status:error"])
        statsd.histogram("modelriver.request.duration_ms", duration, tags=tags)
        raise
```
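If the Datadog Agent's DogStatsD server is reachable at its default address (localhost:8125, which the datadog library targets out of the box), the helper can be called like any other chat wrapper. The workflow name below is a hypothetical placeholder:

```python
# "support-bot" is a placeholder for one of your ModelRiver workflow names.
reply = chat_with_metrics(
    "support-bot",
    [{"role": "user", "content": "Summarize the customer's last three orders."}],
)
print(reply)
```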
Node.js

```javascript
import { StatsD } from "hot-shots";
import OpenAI from "openai";

const statsd = new StatsD();
const client = new OpenAI({
  baseURL: "https://api.modelriver.com/v1",
  apiKey: "mr_live_YOUR_API_KEY",
});

async function chatWithMetrics(workflow, messages) {
  const start = Date.now();
  const tags = [`workflow:${workflow}`];

  try {
    const response = await client.chat.completions.create({
      model: workflow,
      messages,
    });

    const duration = Date.now() - start;
    statsd.histogram("modelriver.request.duration_ms", duration, tags);
    statsd.increment("modelriver.request.count", [...tags, "status:success"]);
    statsd.histogram("modelriver.request.tokens", response.usage.total_tokens, tags);

    return response.choices[0].message.content;
  } catch (error) {
    statsd.increment("modelriver.request.count", [...tags, "status:error"]);
    throw error;
  }
}
```

APM traces
Python (ddtrace)
```python
from ddtrace import tracer
from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelriver.com/v1",
    api_key="mr_live_YOUR_API_KEY",
)

@tracer.wrap(service="modelriver", resource="chat")
def chat(workflow: str, messages: list):
    span = tracer.current_span()
    span.set_tag("modelriver.workflow", workflow)

    response = client.chat.completions.create(
        model=workflow,
        messages=messages,
    )

    span.set_tag("modelriver.tokens", response.usage.total_tokens)
    span.set_tag("modelriver.model", response.model)
    return response.choices[0].message.content
```

Webhook-based forwarding
Forward ModelRiver webhook events to Datadog Logs:
```python
# FastAPI webhook receiver that forwards to Datadog
from fastapi import FastAPI, Request
import httpx

app = FastAPI()

DATADOG_LOG_URL = "https://http-intake.logs.datadoghq.com/api/v2/logs"
DD_API_KEY = "YOUR_DATADOG_API_KEY"

@app.post("/webhooks/modelriver")
async def forward_to_datadog(request: Request):
    body = await request.json()

    # Forward to Datadog Logs
    async with httpx.AsyncClient() as http:
        await http.post(
            DATADOG_LOG_URL,
            headers={"DD-API-KEY": DD_API_KEY},
            json=[{
                "ddsource": "modelriver",
                "service": "ai-gateway",
                "hostname": "modelriver",
                "message": body,
            }],
        )

    return {"status": "ok"}
```

Suggested dashboard widgets
| Widget | Metric | Purpose |
|---|---|---|
| Timeseries | modelriver.request.duration_ms | Latency trends |
| Top list | modelriver.request.count by workflow | Most-used workflows |
| Heat map | modelriver.request.tokens | Token distribution |
| Counter | modelriver.request.count{status:error} | Error rate |
| Query value | avg:modelriver.request.duration_ms | Average latency |
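These widgets can be created in the Datadog UI, or scripted. Below is a minimal sketch using the management API from the same datadog (datadogpy) package; it assumes you also have a Datadog application key, and the dashboard title, widget titles, and queries are illustrative placeholders that simply reference the custom metrics emitted above:

```python
# Sketch: creates an "ordered" dashboard with two of the suggested widgets.
# The placeholder keys and widget titles are assumptions, not fixed names.
from datadog import initialize, api

initialize(api_key="YOUR_DATADOG_API_KEY", app_key="YOUR_DATADOG_APP_KEY")

api.Dashboard.create(
    title="ModelRiver overview",
    layout_type="ordered",
    widgets=[
        {
            "definition": {
                "title": "Request latency by workflow",
                "type": "timeseries",
                "requests": [{"q": "avg:modelriver.request.duration_ms{*} by {workflow}"}],
            }
        },
        {
            "definition": {
                "title": "Error count",
                "type": "query_value",
                "requests": [{"q": "sum:modelriver.request.count{status:error}.as_count()"}],
            }
        },
    ],
)
```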
Best practices
- Tag by workflow: Enables per-workflow dashboards and alerts
- Track error rates: Alert on sudden spikes in status:error
- Correlate with app traces: Nest ModelRiver spans inside your request traces
- Use webhook forwarding for detailed logs: Webhooks contain full request/response metadata
- Set up monitors: Alert when latency exceeds SLAs or error rates spike (a monitor sketch follows this list)
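As a starting point for the last item, a latency monitor can be created programmatically with datadogpy's management API. In the sketch below, the 2-second threshold, notification handle, and tag are placeholders to adjust to your own SLAs:

```python
# Sketch: multi-alert on average latency per workflow. Threshold, message,
# and @-handle are placeholders.
from datadog import initialize, api

initialize(api_key="YOUR_DATADOG_API_KEY", app_key="YOUR_DATADOG_APP_KEY")

api.Monitor.create(
    type="metric alert",
    query="avg(last_5m):avg:modelriver.request.duration_ms{*} by {workflow} > 2000",
    name="ModelRiver latency above SLA",
    message="ModelRiver latency exceeded 2s over the last 5 minutes. @slack-ai-alerts",
    tags=["service:modelriver"],
)
```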
Next steps
- Sentry integration: Error tracking alternative
- Observability: ModelRiver's built-in monitoring
- API reference: Endpoint documentation