Get started with Braintrust
Braintrust is the AI observability platform helping teams measure, evaluate, and improve AI in production.
Iterative experimentation: Rapidly prototype with different prompts and models in the playground.
Performance insights: Built-in tools to evaluate how models and prompts are performing in production, and dig into specific examples. A minimal SDK sketch follows this list.
Real-time monitoring: Log, monitor, and take action on real-world interactions with robust and flexible monitoring. See the logging sketch at the end of this page.
Data management: Manage and review data to centrally store and version your test sets.
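The evaluation workflow is also scriptable. Here is a minimal sketch using the braintrust Python SDK with a scorer from the autoevals package; the project name, toy task, and single data row are illustrative placeholders rather than a prescribed setup.

```python
from braintrust import Eval
from autoevals import Levenshtein

Eval(
    "Say Hi Bot",  # illustrative project name; created on first run
    data=lambda: [{"input": "Foo", "expected": "Hi Foo"}],  # toy test set
    task=lambda input: "Hi " + input,  # the function under evaluation
    scores=[Levenshtein],  # string-similarity scorer from autoevals
)
```

Saved to a file, this would typically be run with the braintrust CLI (for example, braintrust eval my_eval.py) with BRAINTRUST_API_KEY set in the environment, and the results appear as an experiment in the project.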
With Braintrust, teams can compare models, iterate on prompts, catch regressions, and leverage real user data to continuously improve AI applications.
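To feed that loop with real user data, you can instrument your application with the SDK's logger. A minimal sketch, assuming the braintrust Python package and a hypothetical project name and interaction:

```python
import braintrust

# Send logs to a project (hypothetical name); the SDK reads
# BRAINTRUST_API_KEY from the environment.
logger = braintrust.init_logger(project="My App")

# Record one real-world interaction; the fields shown are illustrative.
logger.log(
    input="What is Braintrust?",
    output="An AI observability platform.",
    metadata={"user_id": "u_123"},
)
```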