Visibility

Analytics that show every request

Track requests, tokens, latency, and errors. Debug issues fast with detailed request logs.

Daily volume · Token usage · Latency · Per-organization

Visual

Analytics funnel

Requests roll up into metrics, traces, and charts you can filter.

01

Requests captured

Every API call logged per key/project

02

Traces generated

Latency, provider hops, timing

03

Metrics aggregated

Tokens, costs, success rates

04

Dashboards & charts

Filter by date, provider, workflow

05

Alerts configured

SLA breaches, error spikes

06

Export & share

CSV downloads, API access, team sharing
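A minimal sketch of this funnel in TypeScript: captured request logs rolled up into the aggregated metrics the dashboards are built from. The RequestLog shape and field names here are illustrative assumptions, not the actual schema.

// Hypothetical request-log shape; field names are illustrative, not the real schema.
interface RequestLog {
  timestamp: string;       // ISO timestamp of the API call
  provider: string;        // e.g. "openai"
  model: string;           // e.g. "gpt-4o-mini"
  status: number;          // HTTP status returned to the caller
  durationMs: number;      // end-to-end latency
  promptTokens: number;
  completionTokens: number;
}

// Roll captured requests up into the aggregated metrics shown in steps 01–03.
function aggregate(logs: RequestLog[]) {
  const requests = logs.length;
  const tokens = logs.reduce((sum, l) => sum + l.promptTokens + l.completionTokens, 0);
  const successes = logs.filter((l) => l.status >= 200 && l.status < 300).length;
  const totalLatencyMs = logs.reduce((sum, l) => sum + l.durationMs, 0);
  return {
    requests,
    tokens,
    successRate: requests ? successes / requests : 1,
    avgLatencyMs: requests ? totalLatencyMs / requests : 0,
  };
}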

Dashboard snapshot
chart:
  - date: "2025-02-10"
    requests: 1842
    tokens: 942_330
top_providers:
  - openai:gpt-4o
  - anthropic:sonnet
latency_p50_ms: 620
fail_rate: 0.7%
1

Org-aware

Stats and request logs respect the currently selected organization.
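As a rough sketch, an org-scoped stats call from a client could look like the following. The base URL, endpoint path, and query parameter are assumptions for illustration; the actual API may differ.

// Hypothetical client call: base URL, path, and query parameter are illustrative assumptions.
async function fetchOrgStats(orgId: string, apiKey: string): Promise<unknown> {
  const url = `https://api.example.com/v1/stats?organization=${encodeURIComponent(orgId)}`;
  const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
  if (!res.ok) throw new Error(`Stats request failed with ${res.status}`);
  return res.json(); // daily requests, tokens, and latency for this organization only
}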

2

Explore requests

Inspect provider, model, status, tokens, and payloads to debug fast.

3

Trend ready

Daily charts for requests and tokens help you spot spikes and plan budgets early.
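A small sketch of how those daily points can be derived from raw logs; field names are the same illustrative ones used in the funnel sketch above.

// Bucket logs by calendar day to build the daily requests/tokens chart.
// Field names are illustrative, matching the funnel sketch above.
function dailySeries(
  logs: { timestamp: string; promptTokens: number; completionTokens: number }[],
): { date: string; requests: number; tokens: number }[] {
  const byDay = new Map<string, { requests: number; tokens: number }>();
  for (const l of logs) {
    const date = l.timestamp.slice(0, 10); // "YYYY-MM-DD"
    const bucket = byDay.get(date) ?? { requests: 0, tokens: 0 };
    bucket.requests += 1;
    bucket.tokens += l.promptTokens + l.completionTokens;
    byDay.set(date, bucket);
  }
  return [...byDay.entries()]
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([date, counts]) => ({ date, ...counts }));
}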

Latency

p50 / p95

Track performance by provider and workflow.
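One way to get p50/p95 from raw durations, shown as a nearest-rank sketch; this is not necessarily how the dashboard computes them.

// Nearest-rank percentile over request durations (a sketch, not the dashboard's exact method).
function percentile(durationsMs: number[], p: number): number {
  if (durationsMs.length === 0) return 0;
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}

const samples = [480, 590, 620, 950, 2100]; // example durations in ms
const p50 = percentile(samples, 50); // 620
const p95 = percentile(samples, 95); // 2100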

Tokens

Usage

See totals for budgeting and billing.
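A quick budgeting sketch using token totals; the budget figure and thresholds are arbitrary examples.

// Compare token usage against a monthly budget (numbers and thresholds are arbitrary examples).
function budgetStatus(tokensUsed: number, monthlyBudget: number): string {
  const share = tokensUsed / monthlyBudget;
  if (share >= 1) return "over budget";
  if (share >= 0.8) return "approaching budget";
  return "within budget";
}

console.log(budgetStatus(942_330, 1_500_000)); // "within budget"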

Failures

Surfaced

Investigate 4xx/5xx with full context.
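A sketch of surfacing those failures for triage, grouped by provider and status; field names are illustrative, matching the earlier sketches.

// Pull out 4xx/5xx requests and group them by provider and status code for triage.
function failureSummary(logs: { provider: string; status: number }[]): Record<string, number> {
  const summary: Record<string, number> = {};
  for (const l of logs) {
    if (l.status < 400) continue;
    const key = `${l.provider} ${l.status}`; // e.g. "openai 429"
    summary[key] = (summary[key] ?? 0) + 1;
  }
  return summary;
}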

Scroll the insights

01 · Track

Requests, tokens, latency, and failures per org.

02 · Inspect

Open request logs with provider, status, and payloads.

03 · Compare

Spot spikes by day and provider to tune routing.

04 · Decide

Use data to adjust limits, failover, or prompts.
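A hedged sketch of the Decide step: turning aggregated metrics into a follow-up action. The thresholds and action names are illustrative, not product defaults.

// Illustrative decision rule; thresholds and action names are arbitrary examples.
type Action = "none" | "prefer-failover-provider" | "review-routing-or-limits";

function decide(metrics: { successRate: number; p95LatencyMs: number }): Action {
  if (metrics.successRate < 0.97) return "prefer-failover-provider";
  if (metrics.p95LatencyMs > 3000) return "review-routing-or-limits";
  return "none";
}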

Use cases

  • Monitor org usage across projects.
  • Debug failures with full request context.
  • Budget tokens and spot performance regressions.

What’s unique

  • Org-aware stats aligned with session selection.
  • Request logs tied to providers, models, and workflows.
  • Combines with failover and rate limits for actionable insights.

Rich response metadata

Every response includes detailed analytics data

// Every API response includes analytics metadata:
{
  "data": { ... },
  "customer_data": { /* cached fields from request */ },
  "meta": {
    "status": "success",
    "workflow": "marketing-summary",
    "used_provider": "openai",
    "used_model": "gpt-4o-mini",
    "duration_ms": 1420,
    "usage": {
      "prompt_tokens": 123,
      "completion_tokens": 45
    },
    "attempts": [...]
  }
}

Use customer_data cache fields to surface user IDs, segments, and experiment variants in logs and responses.
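As a sketch of how a caller might use this: the meta and customer_data fields below follow the example above, while the endpoint, request body, and customer_data keys are assumptions for illustration.

// Hypothetical call: endpoint and request body are illustrative assumptions;
// the meta and customer_data fields in the response follow the example above.
async function runWorkflow(apiKey: string) {
  const res = await fetch("https://api.example.com/v1/workflows/marketing-summary", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      input: "Summarize this campaign brief...",
      customer_data: { user_id: "u_123", segment: "trial", experiment: "variant-b" },
    }),
  });
  const body = await res.json();

  // Cached fields are echoed back, so logs tie straight to users, segments, and experiments.
  console.log(body.customer_data);

  // Analytics metadata from the response envelope.
  const { used_provider, used_model, duration_ms, usage } = body.meta;
  console.log(
    `${used_provider}/${used_model} took ${duration_ms}ms ` +
      `(${usage.prompt_tokens} prompt + ${usage.completion_tokens} completion tokens)`,
  );
}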

See what your AI is doing

Combine analytics with failover, rate limiting, and structured outputs to keep quality high.