Prefer a hands-on demo? Skip the manual setup and launch this quickstart in Google Colab.
Steps
1. Set up accounts
Create a Scorecard account, then get your tracing credentials:
- Visit your Settings page in the Scorecard app
- Copy your Scorecard API Key
- Set your environment variables:
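A minimal sketch of the environment setup, assuming OpenLLMetry's standard `TRACELOOP_*` configuration variables and a Scorecard OTLP endpoint; check your Settings page for the exact endpoint URL and key format:

```shell
# Endpoint URL and variable names are assumptions based on OpenLLMetry's
# standard configuration; confirm both in your Scorecard Settings page.
export TRACELOOP_BASE_URL="https://tracing.scorecard.io/otel"
export TRACELOOP_HEADERS="Authorization=Bearer your-scorecard-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```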
2. Install the OpenLLMetry SDK
Install OpenLLMetry and your LLM library:
This example uses OpenAI. Scorecard works with all major providers (OpenAI, Anthropic/Claude, Google Gemini, Groq, AWS Bedrock, and more).
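For a Python project, the install typically looks like this (OpenLLMetry ships as the `traceloop-sdk` package):

```shell
# Install the OpenLLMetry SDK plus the OpenAI client library used in this example
pip install traceloop-sdk openai
```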
3. Initialize OpenLLMetry
Set up OpenLLMetry to automatically trace your LLM calls:
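A minimal initialization sketch, assuming the environment variables from step 1 are set; the `app_name` label is illustrative:

```python
from traceloop.sdk import Traceloop

# Reads TRACELOOP_BASE_URL and TRACELOOP_HEADERS from the environment.
# disable_batch=True flushes each span immediately, which is convenient
# while developing; leave batching on in production.
Traceloop.init(app_name="quickstart", disable_batch=True)
```

Once initialized, calls made through supported LLM client libraries are instrumented automatically, with no per-call changes.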
4. Create and run a simple traced workflow
Create a simple workflow that will be automatically traced. Here’s a minimal example:
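A sketch of such a workflow, assuming the setup above; the `@workflow` decorator names the top-level span (`simple_chat`, matching the span shown later in the traces view), and the model choice is illustrative. Running it requires valid Scorecard and OpenAI credentials:

```python
from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(app_name="quickstart", disable_batch=True)
client = OpenAI()

@workflow(name="simple_chat")
def simple_chat(question: str) -> str:
    # The OpenAI call below is auto-instrumented as an LLM span
    # nested under the "simple_chat" workflow span.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use any supported model
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(simple_chat("What is distributed tracing?"))
```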
5. View traces in Scorecard
After running your application, view the traces in your Scorecard account:

- Visit app.scorecard.io
- Navigate to your project → Traces section
- Explore your traced workflows
What You’ll See
- Workflow spans: High-level operations (`simple_chat`)
- LLM spans: Automatic OpenAI API call instrumentation
- Timing data: Duration of each operation
- Token usage: Input/output tokens for LLM calls
- Model information: Which models were used
- Comprehensive data: All trace information visible in your Scorecard account


Viewing traces in the Scorecard UI.
Key Benefits
- Zero-code instrumentation: LLM calls are automatically traced
- Structured observability: Organize traces with workflows and tasks
- Performance monitoring: Track latency, token usage, and costs
- User feedback integration: Connect user satisfaction to specific traces
- Production debugging: Understand exactly what happened in failed requests