Observability Tool Integrations

Forward AI metrics, traces, and error events to your existing monitoring stack. Correlate ModelRiver data with your application telemetry in Datadog or Sentry.

Overview

While ModelRiver provides its own built-in observability with Request Logs, you may want to correlate AI metrics with your existing application monitoring. These integrations let you forward ModelRiver request data to your observability platform of choice.

Why forward ModelRiver data to external tools?

  • Unified dashboards: AI metrics alongside application metrics
  • Custom alerting: Trigger alerts based on AI latency, error rates, or token costs
  • Distributed tracing: Nest AI request spans inside your application traces
  • Long-term retention: Store AI telemetry with your existing data retention policies

Supported tools

| Tool | Type | Highlights | Difficulty | Guide |
| --- | --- | --- | --- | --- |
| Datadog | Metrics + APM | DogStatsD, traces, webhook forwarding | ⭐⭐ Medium | View guide → |
| Sentry | Error tracking | Exception capture, spans, breadcrumbs | ⭐⭐ Medium | View guide → |

Datadog

Forward AI request metrics, latency data, and token usage to Datadog dashboards. Set up custom monitors to alert on latency spikes or error rate increases.

PYTHON
from datadog import statsd

statsd.histogram("modelriver.request.duration_ms", duration, tags=[f"workflow:{workflow}"])
statsd.increment("modelriver.request.count", tags=[f"workflow:{workflow}", "status:success"])
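The snippet above uses the datadogpy client. If you'd rather not take a dependency, the same metrics can be sent as raw DogStatsD datagrams; the sketch below assumes a DogStatsD agent listening on its default UDP port 8125 (metric names and tags are illustrative):

```python
import socket

def dogstatsd_packet(metric: str, value: float, metric_type: str, tags: list) -> bytes:
    # DogStatsD plaintext format: "name:value|type|#tag1:v1,tag2:v2"
    # where type "h" = histogram and "c" = count.
    tag_part = f"|#{','.join(tags)}" if tags else ""
    return f"{metric}:{value}|{metric_type}{tag_part}".encode()

def send_metric(metric, value, metric_type="h", tags=None,
                host="127.0.0.1", port=8125):
    # UDP is fire-and-forget: no error if the agent is down, no latency added.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(dogstatsd_packet(metric, value, metric_type, tags or []),
                    (host, port))

send_metric("modelriver.request.duration_ms", 412, "h", ["workflow:chat"])
send_metric("modelriver.request.count", 1, "c", ["workflow:chat", "status:success"])
```

UDP keeps metric emission off the request hot path; a dropped datagram costs you one data point, not a failed AI request.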

Full Datadog guide →


Sentry

Capture AI errors, monitor request performance with custom spans, and leave breadcrumb trails for debugging.

PYTHON
import sentry_sdk

with sentry_sdk.start_span(op="ai.chat", description=f"ModelRiver: {workflow}") as span:
    response = client.chat.completions.create(model=workflow, messages=messages)
    span.set_data("tokens", response.usage.total_tokens)
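Exception capture pairs well with the span above. A minimal sketch of the pattern; the `capture` and `breadcrumbs` parameters are injected stand-ins for `sentry_sdk.capture_exception` and `sentry_sdk.add_breadcrumb`, so the helper can be exercised without the SDK:

```python
def traced_ai_call(fn, workflow, capture, breadcrumbs=None):
    """Run an AI request, leaving a breadcrumb for the workflow; on failure,
    forward the exception to the error tracker and re-raise so the caller
    still sees the error."""
    if breadcrumbs is not None:
        breadcrumbs.append({"category": "ai", "message": f"ModelRiver: {workflow}"})
    try:
        return fn()
    except Exception as exc:
        capture(exc)  # in production: sentry_sdk.capture_exception
        raise

# Usage with a stub capture; a real integration would pass the SDK functions.
seen = []
try:
    traced_ai_call(lambda: 1 / 0, workflow="chat", capture=seen.append)
except ZeroDivisionError:
    pass  # the exception was forwarded to `seen` and then re-raised
```

Re-raising after capture is deliberate: error tracking should observe failures, not swallow them.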

Full Sentry guide →


ModelRiver's built-in observability

Don't forget: ModelRiver includes comprehensive observability out of the box:

  • Request Logs: Every request, with full request/response payloads
  • Timeline view: Provider failover sequences, webhook deliveries
  • Cost analytics: Token-level cost tracking per workflow
  • Performance monitoring: Latency percentiles and trends

Built-in observability guide →


Next steps