Complete audit trails for every AI interaction

Maintain comprehensive records of all AI requests, responses, and provider interactions to meet regulatory compliance requirements and support internal audits.

Overview

Many organizations are subject to regulatory requirements that mandate recording and retaining records of AI interactions. Even without regulatory obligations, maintaining audit trails is a best practice for accountability, incident investigation, and governance. ModelRiver's Request Logs provide a comprehensive, tamper-evident record of every AI request made through your API.


What Request Logs capture for compliance

Per-request audit data

Every request log includes:

| Data point | Description | Compliance value |
|---|---|---|
| Timestamp | When the request was made | Timeline reconstruction |
| Provider & model | Which AI service processed the request | Vendor accountability |
| Request body | Complete input sent to the AI provider | Input verification |
| Response body | Complete output from the AI provider | Output verification |
| Token usage | Input and output token counts | Usage auditing |
| Estimated cost | Per-request cost estimate | Financial auditing |
| Status | Success or failure | Error tracking |
| Duration | Processing time | SLA compliance |
| Seed batch | Request source (production, test, playground) | Environment separation |
| Channel ID | Async request lifecycle tracker | Request correlation |
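For illustration, a single exported log entry might look something like the sketch below. The field names and values are assumptions made for this example, not the documented schema; check the Request Logs reference for the exact field names in your exports.

```python
# Hypothetical shape of one request log entry; field names are assumed for illustration.
log_entry = {
    "timestamp": "2024-05-14T09:31:22Z",
    "provider": "openai",
    "model": "gpt-4o",
    "request_body": {"messages": [{"role": "user", "content": "..."}]},
    "response_body": {"choices": [{"message": {"content": "..."}}]},
    "input_tokens": 412,
    "output_tokens": 187,
    "estimated_cost": 0.0043,
    "status": "success",
    "duration_ms": 1280,
    "seed_batch": "live",
    "channel_id": "chn_example",
}

# A one-line audit summary for quick manual review.
print(
    f"{log_entry['timestamp']} {log_entry['provider']}/{log_entry['model']} "
    f"{log_entry['status']} in={log_entry['input_tokens']} out={log_entry['output_tokens']} "
    f"cost=${log_entry['estimated_cost']:.4f} duration={log_entry['duration_ms']}ms"
)
```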

Failover audit data

When requests fail over to backup providers:

| Data point | Description | Compliance value |
|---|---|---|
| Failed provider/model | Which provider failed | Vendor incident tracking |
| Failure reason | Why the provider failed | Root cause documentation |
| Primary request ID | Link to the successful request | Complete request chain |
| All attempt payloads | Full request/response for each attempt | Complete interaction record |
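To see how these fields fit together, the sketch below joins failed attempts back to the request that ultimately succeeded via the primary request ID. The field names (id, primary_request_id, timestamp) are assumptions for illustration only.

```python
# Sketch: reconstruct the full vendor chain for one logical request by joining
# failover attempts to the successful request. Field names are assumed.
def request_chain(logs, primary_id):
    primary = next(entry for entry in logs if entry["id"] == primary_id)
    attempts = [entry for entry in logs if entry.get("primary_request_id") == primary_id]
    attempts.sort(key=lambda entry: entry["timestamp"])  # failed attempts in order
    return attempts + [primary]  # every vendor touched, ending with the success
```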

Webhook and callback audit data

For async and event-driven workflows:

| Data point | Description | Compliance value |
|---|---|---|
| Webhook URL | Where notifications were sent | Notification record |
| Webhook payload | What was sent to your backend | Data flow documentation |
| Delivery status | Success/failure of notification | Delivery confirmation |
| Callback payload | Your backend's response data | Processing verification |
| Callback status | Whether callback was received | Workflow completion record |
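If you also want an independent copy of these deliveries on your side, a webhook receiver can append each payload to an append-only file before normal processing. The sketch below assumes a Flask backend and an arbitrary endpoint path; the payload shape is whatever ModelRiver sends to your configured webhook URL.

```python
# Minimal webhook receiver sketch: keep an independent, append-only copy of each
# delivery for your own audit trail. Endpoint path is an arbitrary choice.
import datetime
import json

from flask import Flask, request

app = Flask(__name__)

@app.post("/modelriver/webhook")
def modelriver_webhook():
    payload = request.get_json(force=True)
    record = {
        "received_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "payload": payload,
    }
    with open("webhook_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return {"status": "received"}, 200

if __name__ == "__main__":
    app.run(port=8000)
```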

Common compliance requirements

Data retention

Requirement: Maintain records of AI interactions for a specified period.

How ModelRiver helps: Request Logs are stored with timestamps and can be filtered by date. Your data retention policy determines how long logs should be kept.

Considerations:

  • Determine your required retention period based on applicable regulations
  • Plan for data export if logs need to be archived beyond ModelRiver's retention window (see the export sketch after this list)
  • Consider that request/response bodies may contain sensitive data
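A minimal export sketch follows. The endpoint URL, query parameters, and response envelope are assumptions made for illustration; consult the ModelRiver API reference for the actual route and parameters before relying on this.

```python
# Export sketch: pull request logs for a date range and archive them as JSON Lines.
# The endpoint URL, query parameters, and response shape are assumptions.
import json

import requests

API_KEY = "YOUR_PROJECT_API_KEY"
LOGS_URL = "https://api.modelriver.example/v1/request_logs"  # hypothetical endpoint

def export_logs(date_from, date_to, out_path):
    resp = requests.get(
        LOGS_URL,
        params={"from": date_from, "to": date_to},       # assumed filter parameters
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()
    with open(out_path, "w") as f:
        for entry in resp.json().get("data", []):        # assumed response envelope
            f.write(json.dumps(entry) + "\n")            # one log entry per line

export_logs("2024-01-01", "2024-03-31", "q1_request_logs.jsonl")
```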

Input/output auditing

Requirement: Ability to review what was sent to and received from AI systems.

How ModelRiver helps: Every request log includes the complete request body (what was sent) and response body (what was received), viewable in both raw JSON and interactive tree formats.

Considerations:

  • Request bodies contain the exact prompts and user data sent to AI providers
  • Response bodies contain the complete AI-generated output
  • Use these for post-hoc review of AI interactions (a sampling sketch follows this list)
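For post-hoc review, a reviewer can sample prompt/response pairs from an archived export rather than reading every entry. The sketch below assumes the JSON Lines archive from the export sketch above, along with the hypothetical field names used earlier.

```python
# Post-hoc review sketch: sample archived entries and print prompt/output pairs.
# Field names and nesting are assumptions carried over from the earlier examples.
import json
import random

with open("q1_request_logs.jsonl") as f:
    entries = [json.loads(line) for line in f]

for entry in random.sample(entries, k=min(10, len(entries))):
    prompt = entry["request_body"]["messages"][-1]["content"]            # assumed shape
    output = entry["response_body"]["choices"][0]["message"]["content"]  # assumed shape
    print(f"--- {entry['timestamp']} {entry['provider']}/{entry['model']}")
    print("PROMPT:", prompt[:200])
    print("OUTPUT:", output[:200])
```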

Provider accountability

Requirement: Track which AI providers processed which requests.

How ModelRiver helps: Each log entry records the provider and model used. Failover attempts show distinct provider/model combinations, creating a complete record of vendor involvement.

Considerations:

  • Multi-provider failover means a single logical request may involve multiple vendors
  • Each vendor interaction is individually logged
  • Use this data for vendor risk assessments and due diligence (a summary sketch follows this list)
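A simple vendor involvement summary, counting requests per provider/model pair across an archive, can feed a vendor risk assessment. The sketch assumes the JSON Lines archive and field names from the earlier examples.

```python
# Vendor involvement summary sketch: count requests per provider/model pair.
# Field names are assumed; any failover attempts in the export are counted
# against the provider that handled each attempt.
import json
from collections import Counter

counts = Counter()
with open("q1_request_logs.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        counts[(entry["provider"], entry["model"])] += 1

for (provider, model), n in counts.most_common():
    print(f"{provider}/{model}: {n} requests")
```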

Access control documentation

Requirement: Document who accessed AI services and how.

How ModelRiver helps:

  • Requests are scoped to projects with project-specific API keys
  • The seed batch field distinguishes between production API calls, test mode, and playground usage
  • Combined with your internal API key management, you can trace requests to specific applications or teams

Audit trail best practices

Separate environments

  • Use test mode for development and testing
  • Use playground for validation and experimentation
  • Keep live mode for production traffic
  • Filter by environment when conducting audits to focus on relevant requests (see the sketch after this list)
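As a sketch, filtering an archive down to production traffic is a matter of checking the seed batch field; the value name below ("live") is an assumption, so use the values that appear in your own logs.

```python
# Environment filter sketch: restrict an audit to production traffic only.
# The seed batch value ("live") is an assumption.
import json

with open("q1_request_logs.jsonl") as f:
    entries = [json.loads(line) for line in f]

production_entries = [e for e in entries if e["seed_batch"] == "live"]
print(f"{len(production_entries)} of {len(entries)} entries are production traffic")
```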

Regular audits

  • Schedule periodic reviews of request logs
  • Focus on the following (a screening sketch follows this list):
    • Unusual patterns (sudden spikes in requests or errors)
    • High-cost requests that may indicate misuse
    • Requests with unexpected providers (indicating failover issues)
    • Requests outside normal operating hours
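A periodic review can start from a simple screening pass over an archive that flags entries matching the criteria above (request-volume spikes would need an additional per-day count). Thresholds, provider names, and field names below are assumptions to tune for your own traffic profile.

```python
# Audit screening sketch: flag high-cost, unexpected-provider, and off-hours entries.
# Thresholds, provider names, and field names are assumptions for illustration.
import json
from datetime import datetime

EXPECTED_PROVIDERS = {"openai", "anthropic"}   # providers you intend to use
COST_THRESHOLD = 0.50                          # flag requests above this estimated cost (USD)
BUSINESS_HOURS = range(7, 20)                  # 07:00-19:59 UTC treated as normal

flags = []
with open("q1_request_logs.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        hour = datetime.fromisoformat(entry["timestamp"].replace("Z", "+00:00")).hour
        if entry["estimated_cost"] > COST_THRESHOLD:
            flags.append(("high_cost", entry))
        if entry["provider"] not in EXPECTED_PROVIDERS:
            flags.append(("unexpected_provider", entry))
        if hour not in BUSINESS_HOURS:
            flags.append(("off_hours", entry))

for reason, entry in flags:
    print(reason, entry["timestamp"], entry["provider"], f"${entry['estimated_cost']:.4f}")
```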

Document your procedures

  • Maintain written procedures for:
    • How to access and interpret Request Logs
    • How to export log data for external review
    • How to identify and escalate anomalies
    • Retention and archival policies

Preserve evidence

  • When investigating incidents, capture relevant request logs promptly
  • Use the copy functionality to preserve request and response payloads
  • Document the timeline of events using the chronological log view

Regulatory considerations

Note: This section provides general guidance. Consult your legal and compliance teams for requirements specific to your jurisdiction and industry.

Financial services

  • AI-generated financial advice or trading decisions may require comprehensive audit trails
  • Token usage and cost data support financial transaction records

Healthcare

  • AI interactions involving patient data must comply with data protection regulations
  • Request and response payloads may contain protected health information

EU AI Act

  • High-risk AI systems require detailed logging of AI decision-making
  • Request Logs provide the foundation for the required documentation

SOC 2 / ISO 27001

  • Audit trail requirements for information security management
  • Request Logs demonstrate monitoring and logging practices

Next steps