What is the Console?
The ModelRiver Console is a web-based dashboard where you manage every aspect of your AI infrastructure. Instead of juggling provider dashboards, separate monitoring tools, and manual configuration files, the Console brings providers, workflows, observability, and security into a single interface.
Every action you take in the Console, whether connecting a provider, creating a workflow, or revoking an API key, takes effect immediately. There is no deployment step. Changes propagate to your live applications in real time.
Console overview
When you sign in, the Console opens to your active organization. From there, you can select a project to work on. Once inside a project, the sidebar provides quick access to every section:
| Section | What you do there |
|---|---|
| Providers | Connect, configure, and rotate credentials for OpenAI, Anthropic, Google, Cohere, Mistral, and custom endpoints |
| Workflows | Build request pipelines with primary/fallback providers, structured outputs, cache fields, and event hooks |
| Playground | Test prompts and workflows interactively before they reach production |
| Request Logs | Inspect every API request: provider attempts, token usage, latency, webhook deliveries, and error details |
| Webhooks | Register endpoints, verify delivery signatures, and monitor delivery health |
| Settings | Manage API keys, team members, project metadata, and environment configuration |
Tip: You can switch between projects from the Console home page. Each project maintains its own providers, workflows, keys, and logs.
Projects
Projects are the top-level containers in ModelRiver. Each project isolates its own set of providers, workflows, API keys, and request logs. This separation lets you run independent environments for development, staging, and production without interference.
Creating a project
- Click the project selector in the top navigation.
- Choose Create Project.
- Give it a descriptive name (e.g. `production`, `staging`, `ml-experiments`).
- Invite teammates who need access.
When to create separate projects
- Environment isolation: Keep development traffic out of production logs and billing.
- Team boundaries: Give each team its own project with separate provider credentials and API keys.
- Client separation: If you serve multiple customers, isolated projects prevent cross-contamination of credentials and data.
Providers
The Providers section is where you connect your AI model vendors. ModelRiver supports OpenAI, Anthropic, Google (Gemini), Cohere, Mistral, and custom-compatible endpoints out of the box.
Connecting a provider
- Navigate to Providers in your project.
- Click the provider you want to connect.
- Paste your API key or access token.
- Click Save. The credential is encrypted at rest immediately.
Provider management
- Rotate credentials: Update a provider's API key at any time. The new key takes effect immediately; no restart or redeployment is needed.
- Remove a provider: Disconnecting a provider invalidates it for all workflows that reference it. Ensure fallback providers are configured before removing.
- Credential security: Provider keys are encrypted at rest and masked in the UI. Only the last four characters are visible after saving. See Provider credentials for details.
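The masking rule above (only the last four characters remain visible) can be sketched as follows. `mask_credential` is a hypothetical helper shown purely for illustration, not part of ModelRiver:

```python
def mask_credential(key: str, visible: int = 4) -> str:
    """Mask all but the last `visible` characters of a credential."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

print(mask_credential("sk-abc123xyz789"))  # ***********z789
```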
Workflows
Workflows are the core building blocks of ModelRiver. Each workflow defines a complete request pipeline: which provider to use, what model to call, how to handle failures, and what schema the response should follow.
Building a workflow
- Open Workflows and click Create Workflow.
- Name the workflow (e.g. `customer-support`, `content-generation`).
- Select a primary provider and model.
- Optionally add up to two fallback providers for automatic failover.
- Attach a structured output schema if you need guaranteed JSON shapes.
- Define cache fields to surface business identifiers in responses and logs.
- Save. The workflow is live immediately.
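Taken together, the steps above might produce a workflow shaped roughly like the following. Every field name and value here is illustrative, not ModelRiver's actual export format:

```json
{
  "name": "customer-support",
  "primary": { "provider": "openai", "model": "gpt-4o" },
  "fallbacks": [
    { "provider": "anthropic", "model": "claude-sonnet" }
  ],
  "structured_output": {
    "type": "object",
    "properties": { "reply": { "type": "string" } },
    "required": ["reply"]
  },
  "cache_fields": ["user_id", "session_id"],
  "events": ["support.reply.created"]
}
```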
Key workflow capabilities
- Provider routing: Automatic failover to backup providers when the primary fails or times out.
- Structured outputs: Attach JSON schemas to enforce response shapes. ModelRiver validates responses against your schema.
- Cache fields: Surface business data (user IDs, session IDs, experiment tags) in API responses under `customer_data`.
- Event hooks: Attach event names to trigger webhook callbacks for asynchronous processing.
- Test mode: Validate integrations with predictable sample data, zero provider calls, and no quota consumption.
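Provider routing behaves conceptually like the sketch below: each configured provider is tried in order until one succeeds, and every attempt is recorded (which is what the failover chain in Request Logs surfaces). This illustrates the concept only; it is not ModelRiver's implementation:

```python
class ProviderError(Exception):
    """Raised when a provider call fails or times out."""

def call_with_failover(providers, prompt):
    """Try each provider in order; return the first successful response
    plus the chain of attempts (as surfaced in request logs)."""
    attempts = []
    for provider in providers:
        try:
            response = provider(prompt)
            attempts.append((provider.__name__, "success"))
            return response, attempts
        except ProviderError as exc:
            attempts.append((provider.__name__, f"error: {exc}"))
    raise ProviderError(f"all providers failed: {attempts}")

# Hypothetical providers standing in for real API calls:
def primary(prompt):
    raise ProviderError("timeout")

def fallback(prompt):
    return f"echo: {prompt}"

response, attempts = call_with_failover([primary, fallback], "hello")
print(response)  # echo: hello
```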
→ Build a workflow: Step-by-step guide
Playground
The Playground is an interactive environment for testing prompts and workflows before they reach production. You can send requests, inspect responses, and iterate on your prompts, all without writing code.
Using the Playground
- Select a workflow from the dropdown.
- Compose your messages (system prompt, user message, etc.).
- Click Run.
- Inspect the response, token usage, and latency in real time.
Playground modes
- Production mode: Sends real requests to your configured providers. Use this to validate workflow behavior with live models.
- Test mode: Returns sample data from your structured output schema without calling providers. Use this to validate integration logic without consuming credits.
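Conceptually, test mode derives placeholder data from your structured output schema instead of calling a provider. The generator below is a simplified sketch of that idea, handling only a few JSON Schema types; it is not ModelRiver's actual sampler:

```python
def sample_from_schema(schema):
    """Produce placeholder data for a small JSON Schema subset."""
    t = schema.get("type")
    if t == "object":
        return {key: sample_from_schema(sub)
                for key, sub in schema.get("properties", {}).items()}
    if t == "array":
        return [sample_from_schema(schema.get("items", {}))]
    if t == "string":
        return "sample"
    if t == "number":
        return 0.0
    if t == "integer":
        return 0
    if t == "boolean":
        return True
    return None

schema = {"type": "object",
          "properties": {"reply": {"type": "string"},
                         "confidence": {"type": "number"}}}
print(sample_from_schema(schema))  # {'reply': 'sample', 'confidence': 0.0}
```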
Tip: Playground requests appear in Request Logs with a `Playground` filter, so you can review them separately from production traffic.
Request Logs
Request Logs capture the complete lifecycle of every AI request made through your API. Each log entry records provider attempts, token usage, latency, response payloads, webhook deliveries, and backend callbacks.
What Request Logs show
| Data point | Why it matters |
|---|---|
| Provider & model | Track which vendors handle your requests and compare performance |
| Token usage | Monitor input/output tokens for cost calculation and optimization |
| Duration | Identify slow requests and compare provider latency |
| Status | Instantly spot failures with color-coded success/error indicators |
| Failover chain | See every provider attempt, including fallbacks, in chronological order |
| Webhook deliveries | Verify that async notifications reached your endpoints |
Filtering logs
Use filters to focus on specific request types:
- Live mode: Production API requests from your applications
- Test mode: Workflow test requests with sample data
- Playground: Requests made from the Console Playground
- All requests: Unfiltered view of everything
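If you ever export logs and want the same breakdown client-side, each filter is just a match on the entry's mode; the `mode` field name here is an assumption for illustration:

```python
def filter_logs(entries, mode=None):
    """Return entries matching the given mode; None means 'All requests'."""
    if mode is None:
        return list(entries)
    return [entry for entry in entries if entry.get("mode") == mode]

logs = [{"id": 1, "mode": "live"},
        {"id": 2, "mode": "test"},
        {"id": 3, "mode": "playground"}]
print(filter_logs(logs, "playground"))  # [{'id': 3, 'mode': 'playground'}]
```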
Timeline view
Click any log entry to open the Timeline, a visual representation of the complete request lifecycle:
- Failover attempts: Failed provider attempts before the successful request
- Main request: The final provider response returned to your application
- Webhook deliveries: Async notifications sent to your endpoints
- Backend callbacks: Responses from your backend for event-driven workflows
→ Observability deep dive: Full documentation on Request Logs, timeline components, and best practices
Webhooks
The Webhooks section lets you register HTTP endpoints that receive real-time notifications when AI requests complete. This is essential for asynchronous workflows and event-driven architectures.
Managing webhooks
- Navigate to Webhooks in your project.
- Add a webhook URL (e.g. `https://api.yourapp.com/webhooks/modelriver`).
- ModelRiver sends a signed payload to your endpoint for every matching event.
Webhook reliability
- Automatic retries: Failed deliveries are retried up to 3 times within a 5-minute window.
- Signature verification: Every payload includes an HMAC-SHA256 signature for authenticity. See Signature verification.
- Delivery tracking: Monitor delivery status (Planned → Delivering → Success/Error) in Request Logs.
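Verifying an HMAC-SHA256 signature on the receiving end looks roughly like this. The secret format and hex encoding are assumptions for illustration; see Signature verification for the actual scheme:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare it
    to the received signature in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

secret = b"whsec_example"  # hypothetical webhook secret
payload = b'{"event":"request.completed"}'
sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()

print(verify_signature(payload, secret, sig))        # True
print(verify_signature(payload, secret, "bad-sig"))  # False
```

Always compute the HMAC over the raw request body, before any JSON parsing, so whitespace differences cannot break verification.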
→ Webhooks documentation: Full setup guide, event types, and retry rules
Settings
The Settings section controls project-wide configuration, API key management, and team access.
API keys
- Create keys: Generate API keys with configurable expiration (1 day to never). Keys use the `mr_live_` prefix for easy identification.
- Revoke keys: Invalidation is immediate. In-flight requests using a revoked key fail with `401 Unauthorized`.
- Best practice: Create one key per integration for granular revocation. Rotate regularly.
→ API key management: Full documentation on key creation, expiration, and rotation
Team management
- Invite teammates: Add team members from the user menu. Each member gets full access to the project's providers, workflows, and logs.
- Access control: Restrict sensitive operations to trusted team members.
Project configuration
- Project name: Update your project's display name at any time.
- Environment variables: Store configuration values that your workflows and integrations can reference.
Keyboard shortcuts
The Console supports keyboard shortcuts for faster navigation:
| Shortcut | Action |
|---|---|
| ⌘ K / Ctrl K | Open search |
| ⌘ / / Ctrl / | Open keyboard shortcut reference |
Security in the Console
The Console is built with security at every layer:
- Session-based authentication: Secure cookies protect dashboard access. Sessions expire after inactivity.
- Encrypted credentials: Provider API keys and secrets are encrypted at rest and masked in the UI.
- Hashed API keys: Your ModelRiver API keys are stored as SHA-256 hashes. The plaintext is shown once at creation and never stored.
- Signed webhooks: All webhook payloads include HMAC-SHA256 signatures for authenticity verification.
- Audit trail: Every request is logged with full metadata for compliance and debugging.
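The hashed-key model means only a digest of each API key is ever stored: the plaintext is shown once at creation, and later requests are checked by hashing the presented key. A sketch of that idea (not ModelRiver's code):

```python
import hashlib
import secrets

def create_api_key(prefix: str = "mr_live_"):
    """Generate a key; return the plaintext once and store only its hash."""
    plaintext = prefix + secrets.token_hex(16)
    stored_hash = hashlib.sha256(plaintext.encode()).hexdigest()
    return plaintext, stored_hash

def verify_api_key(presented: str, stored_hash: str) -> bool:
    """Check a presented key by hashing it and comparing digests."""
    return hashlib.sha256(presented.encode()).hexdigest() == stored_hash

key, stored = create_api_key()
print(verify_api_key(key, stored))              # True
print(verify_api_key("mr_live_wrong", stored))  # False
```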
→ Security documentation: Enterprise-grade security controls, compliance, and data retention
Next steps
- Getting started: Create your first project, connect providers, and make your first API call
- Explore Solutions: Build with model failover, real-time streaming, and type-safe outputs
- Workflows: Design multi-provider pipelines with fallbacks and structured outputs
- API reference: Endpoints, authentication, streaming, and function calling
- Observability: Deep dive into Request Logs, timelines, and debugging
- Webhooks: Set up real-time event notifications
- Client SDK: Framework-specific libraries for React, Vue, Angular, and Svelte
- Security: Credential management, data retention, and compliance