Overview
Request Logs can contain thousands of entries across production traffic, playground tests, and CI/CD runs. Without effective filtering, you'll waste time scrolling through irrelevant data. Mastering the filter system makes debugging dramatically faster.
Filter modes explained
When to use each filter
| Filter | When to use | What it shows |
|---|---|---|
| All requests | Broad investigation, searching for a specific request | Everything |
| Live mode | Debugging production issues, monitoring health | Real API calls only |
| Test mode | Reviewing CI/CD results, integration tests | API calls with test flag |
| Playground (Production) | Validating workflow changes | Console tests with real providers |
| Playground (Test mode) | Testing workflow structure | Console tests with sample data |
| All Playground | Reviewing all console testing | Both playground types |
Decision tree
- Are you debugging a production issue? → Use "Live mode"
- Are you reviewing test results from CI/CD or API tests? → Use "Test mode"
- Are you reviewing test results from the console? → Use "All Playground"
- Are you investigating a specific request? → Use "All requests" and search
- Are you monitoring overall health? → Start with "Live mode"
Filtering strategies
Strategy 1: Funnel approach
Start broad, then narrow down:
- All requests — Get the big picture
- Live mode — Focus on production
- Error status — Focus on failures
- Time range — Focus on when the issue occurred
- Click to inspect — Drill into specific requests
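The funnel can be sketched as successive filters, each narrowing the previous result set. This is a minimal illustration, not the product's API; the entry fields (`mode`, `status`, `ts`) are hypothetical stand-ins for whatever the real log schema exposes.

```python
from datetime import datetime

# Hypothetical log entries; the real Request Logs schema may differ.
logs = [
    {"mode": "live", "status": "error", "ts": datetime(2024, 1, 15, 14, 5)},
    {"mode": "live", "status": "ok",    "ts": datetime(2024, 1, 15, 14, 6)},
    {"mode": "test", "status": "error", "ts": datetime(2024, 1, 15, 14, 7)},
]

def funnel(entries, mode=None, status=None, since=None):
    """Apply each optional filter in turn, narrowing the result set."""
    for e in entries:
        if mode and e["mode"] != mode:
            continue
        if status and e["status"] != status:
            continue
        if since and e["ts"] < since:
            continue
        yield e

# Steps 2-4 of the funnel: live mode, errors only, recent time window.
suspects = list(funnel(logs, mode="live", status="error",
                       since=datetime(2024, 1, 15, 14, 0)))
print(len(suspects))  # 1
```

Each filter is independent, so you can add or drop stages without reworking the rest of the funnel.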
Strategy 2: Environment isolation
Match your filter to your current task:
Deploying a workflow change:
1. Test in Playground (Test mode) → filter "Playground (Test mode)"
2. Validate in Playground (Production) → filter "Playground (Production)"
3. Deploy and monitor → filter "Live mode"
Each phase uses a different filter to avoid cross-contamination.
Strategy 3: Comparative analysis
Use filters to compare behavior across environments:
- Run the same prompt in Playground (Production)
- Find the same prompt in Live mode
- Compare request/response payloads between the two
- Differences may explain production-only issues
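Step 3 — comparing the two payloads — is easy to do mechanically once you have both request bodies. A sketch using Python's standard `difflib`; the payload contents here are hypothetical examples, not values from the product.

```python
import difflib
import json

# Hypothetical payloads captured from the two environments.
playground = {"model": "gpt-4", "temperature": 0.0, "prompt": "Summarize..."}
live       = {"model": "gpt-4", "temperature": 0.7, "prompt": "Summarize..."}

def diff_payloads(a, b):
    """Return unified-diff lines between two pretty-printed JSON payloads."""
    a_lines = json.dumps(a, indent=2, sort_keys=True).splitlines()
    b_lines = json.dumps(b, indent=2, sort_keys=True).splitlines()
    return list(difflib.unified_diff(a_lines, b_lines,
                                     fromfile="playground", tofile="live",
                                     lineterm=""))

for line in diff_payloads(playground, live):
    print(line)
```

Sorting the keys before diffing ensures that only real value differences show up, not incidental ordering differences between the two captures.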
Identifying requests by `seed_batch`
Each request type has a `seed_batch` prefix that helps categorize it:
| Prefix | Source | Filter mode |
|---|---|---|
| `live:` | Real API calls | Live mode |
| `test:` | API calls with test flag | Test mode |
| `pg:` | Console playground (production) | Playground (Production) |
| `pg_test_mode:` | Console playground (test mode) | Playground (Test mode) |
| `callback:` | Backend callbacks | (included in parent request's timeline) |
| `pg_callback:` | Playground callbacks | (included in parent request's timeline) |
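The table above maps directly to a lookup, which is handy when processing exported logs. This is a sketch under the assumption that `seed_batch` values are plain strings starting with one of the documented prefixes; the sample value `run-42` is made up.

```python
# Maps each documented seed_batch prefix to its filter mode.
# None means the request is shown in its parent request's timeline.
PREFIX_TO_FILTER = {
    "live:": "Live mode",
    "test:": "Test mode",
    "pg:": "Playground (Production)",
    "pg_test_mode:": "Playground (Test mode)",
    "callback:": None,
    "pg_callback:": None,
}

def filter_for(seed_batch: str):
    """Return the filter mode a request appears under, or None for callbacks."""
    for prefix, mode in PREFIX_TO_FILTER.items():
        if seed_batch.startswith(prefix):
            return mode
    raise ValueError(f"unknown seed_batch prefix: {seed_batch!r}")

print(filter_for("pg_test_mode:run-42"))  # Playground (Test mode)
```

Because every prefix ends with a colon, a value like `pg_test_mode:run-42` cannot accidentally match the shorter `pg:` prefix.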
Tips for efficient navigation
- Use the refresh button after making API calls to see the latest logs immediately
- Scan status badges first — green means success, red means failure, and amber means a failover occurred
- Check failed models count — A badge like "2 failed" tells you before you even click
- Look at duration outliers — Requests much slower or faster than average deserve attention
- Use pagination wisely — Don't load all pages; focus on the relevant time window first
Next steps
- Monitoring Failed Models — Track provider stability
- Separating Environments — Keep data clean
- Back to Best Practices — Return to the overview