Learn how to enable and set up OpenTelemetry for Gemini CLI.
Gemini CLI's observability system is built on OpenTelemetry, the vendor-neutral, industry-standard observability framework.
All telemetry behavior is controlled through your `.gemini/settings.json` file. These settings can be overridden by environment variables or CLI flags.
| Setting | Environment Variable | CLI Flag | Description | Values | Default |
|---|---|---|---|---|---|
| `enabled` | `GEMINI_TELEMETRY_ENABLED` | `--telemetry` / `--no-telemetry` | Enable or disable telemetry | `true`/`false` | `false` |
| `target` | `GEMINI_TELEMETRY_TARGET` | `--telemetry-target <local\|gcp>` | Where to send telemetry data | `"gcp"`/`"local"` | `"local"` |
| `otlpEndpoint` | `GEMINI_TELEMETRY_OTLP_ENDPOINT` | `--telemetry-otlp-endpoint <URL>` | OTLP collector endpoint | URL string | `http://localhost:4317` |
| `otlpProtocol` | `GEMINI_TELEMETRY_OTLP_PROTOCOL` | `--telemetry-otlp-protocol <grpc\|http>` | OTLP transport protocol | `"grpc"`/`"http"` | `"grpc"` |
| `outfile` | `GEMINI_TELEMETRY_OUTFILE` | `--telemetry-outfile <path>` | Save telemetry to a file (overrides `otlpEndpoint`) | file path | - |
| `logPrompts` | `GEMINI_TELEMETRY_LOG_PROMPTS` | `--telemetry-log-prompts` / `--no-telemetry-log-prompts` | Include prompts in telemetry logs | `true`/`false` | `true` |
| `useCollector` | `GEMINI_TELEMETRY_USE_COLLECTOR` | - | Use an external OTLP collector (advanced) | `true`/`false` | `false` |
Note on boolean environment variables: for the boolean settings (`enabled`, `logPrompts`, `useCollector`), setting the corresponding environment variable to `true` or `1` enables the feature. Any other value disables it.
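The precedence and boolean rules above can be sketched in a few lines. This is an illustrative model only, not the CLI's actual implementation; the helper names are hypothetical, and it assumes flags take precedence over environment variables, which in turn override `settings.json`:

```python
def parse_bool_env(value):
    # Mirrors the documented rule: "true" or "1" enables, anything else disables.
    return value in ("true", "1")

def resolve_setting(flag_value, env_value, settings_value, default):
    # Illustrative precedence: CLI flag > environment variable > settings.json > default.
    for candidate in (flag_value, env_value, settings_value):
        if candidate is not None:
            return candidate
    return default

# GEMINI_TELEMETRY_ENABLED=1 enables telemetry even if settings.json disables it.
assert parse_bool_env("1") is True
assert parse_bool_env("yes") is False   # any value other than "true"/"1" disables
assert resolve_setting(None, True, False, False) is True
```

Note that under this rule an unset variable (`None`) also reads as disabled, so only an explicit `true` or `1` turns a boolean feature on.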
For detailed information about all configuration options, see the Configuration Guide.
Before using either method below, complete these steps:
```shell
# Set your project IDs
export OTLP_GOOGLE_CLOUD_PROJECT="your-telemetry-project-id"
export GOOGLE_CLOUD_PROJECT="your-project-id"

# Authenticate with Application Default Credentials...
gcloud auth application-default login
# ...or point at a service-account key instead
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account.json"

# Enable the required APIs
gcloud services enable \
  cloudtrace.googleapis.com \
  monitoring.googleapis.com \
  logging.googleapis.com \
  --project="$OTLP_GOOGLE_CLOUD_PROJECT"
```

The simplest option sends telemetry directly to Google Cloud services; no collector is needed.
`.gemini/settings.json`:

```json
{
  "telemetry": {
    "enabled": true,
    "target": "gcp"
  }
}
```

For custom processing, filtering, or routing, use an OpenTelemetry collector to forward data to Google Cloud.
`.gemini/settings.json`:

```json
{
  "telemetry": {
    "enabled": true,
    "target": "gcp",
    "useCollector": true
  }
}
```

Then start the collector:

```shell
npm run telemetry -- --target=gcp
```

This will:
- Write local collector logs to `~/.gemini/tmp/<projectHash>/otel/collector-gcp.log` (stop the collector with `Ctrl+C`). Inspect that file to view local collector logs.

For local development and debugging, you can capture telemetry data locally:
`.gemini/settings.json`:

```json
{
  "telemetry": {
    "enabled": true,
    "target": "local",
    "otlpEndpoint": "",
    "outfile": ".gemini/telemetry.log"
  }
}
```

Telemetry is written to the configured file (here, `.gemini/telemetry.log`). Alternatively, run a local collector:

```shell
npm run telemetry -- --target=local
```

This will:
- Write local collector logs to `~/.gemini/tmp/<projectHash>/otel/collector.log` (stop the collector with `Ctrl+C`).

The following section describes the structure of logs and metrics generated for Gemini CLI.
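If you captured telemetry to an outfile, you can summarize the records with a short script before digging into the schema. This is a sketch only: it assumes each record is serialized as one JSON object per line with an `event.name` field, and the real outfile layout may differ.

```python
import json
from collections import Counter

def count_events(path):
    """Tally telemetry records by event name, assuming one JSON object per line."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or non-JSON lines
            # "event.name" is an assumption about the record schema.
            counts[record.get("event.name", "unknown")] += 1
    return counts
```

For example, `count_events(".gemini/telemetry.log")` would report how many `gemini_cli.tool_call` records the session emitted, if the on-disk format matches this assumption.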
A `sessionId` is included as a common attribute on all logs and metrics.

Logs are timestamped records of specific events. The following events are logged for Gemini CLI:
- `gemini_cli.config`: This event occurs once at startup with the CLI's configuration.
  - `model` (string)
  - `embedding_model` (string)
  - `sandbox_enabled` (boolean)
  - `core_tools_enabled` (string)
  - `approval_mode` (string)
  - `api_key_enabled` (boolean)
  - `vertex_ai_enabled` (boolean)
  - `code_assist_enabled` (boolean)
  - `log_prompts_enabled` (boolean)
  - `file_filtering_respect_git_ignore` (boolean)
  - `debug_mode` (boolean)
  - `mcp_servers` (string)
  - `output_format` (string: "text" or "json")
- `gemini_cli.user_prompt`: This event occurs when a user submits a prompt.
  - `prompt_length` (int)
  - `prompt_id` (string)
  - `prompt` (string; excluded if `log_prompts_enabled` is configured to be `false`)
  - `auth_type` (string)
- `gemini_cli.tool_call`: This event occurs for each function call.
  - `function_name`
  - `function_args`
  - `duration_ms`
  - `success` (boolean)
  - `decision` (string: "accept", "reject", "auto_accept", or "modify", if applicable)
  - `error` (if applicable)
  - `error_type` (if applicable)
  - `content_length` (int, if applicable)
  - `metadata` (if applicable, dictionary of string -> any)
- `gemini_cli.file_operation`: This event occurs for each file operation.
  - `tool_name` (string)
  - `operation` (string: "create", "read", "update")
  - `lines` (int, if applicable)
  - `mimetype` (string, if applicable)
  - `extension` (string, if applicable)
  - `programming_language` (string, if applicable)
  - `diff_stat` (JSON string, if applicable), with the following members:
    - `ai_added_lines` (int)
    - `ai_removed_lines` (int)
    - `user_added_lines` (int)
    - `user_removed_lines` (int)
- `gemini_cli.api_request`: This event occurs when making a request to the Gemini API.
  - `model`
  - `request_text` (if applicable)
- `gemini_cli.api_error`: This event occurs if the API request fails.
  - `model`
  - `error`
  - `error_type`
  - `status_code`
  - `duration_ms`
  - `auth_type`
- `gemini_cli.api_response`: This event occurs upon receiving a response from the Gemini API.
  - `model`
  - `status_code`
  - `duration_ms`
  - `error` (optional)
  - `input_token_count`
  - `output_token_count`
  - `cached_content_token_count`
  - `thoughts_token_count`
  - `tool_token_count`
  - `response_text` (if applicable)
  - `auth_type`
- `gemini_cli.tool_output_truncated`: This event occurs when the output of a tool call is too large and gets truncated.
  - `tool_name` (string)
  - `original_content_length` (int)
  - `truncated_content_length` (int)
  - `threshold` (int)
  - `lines` (int)
  - `prompt_id` (string)
- `gemini_cli.malformed_json_response`: This event occurs when a `generateJson` response from the Gemini API cannot be parsed as JSON.
  - `model`
- `gemini_cli.flash_fallback`: This event occurs when Gemini CLI switches to flash as a fallback.
  - `auth_type`
- `gemini_cli.slash_command`: This event occurs when a user executes a slash command.
  - `command` (string)
  - `subcommand` (string, if applicable)
- `gemini_cli.extension_enable`: This event occurs when an extension is enabled.
- `gemini_cli.extension_install`: This event occurs when an extension is installed.
  - `extension_name` (string)
  - `extension_version` (string)
  - `extension_source` (string)
  - `status` (string)
- `gemini_cli.extension_uninstall`: This event occurs when an extension is uninstalled.

Metrics are numerical measurements of behavior over time.
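To make the relationship between events and metrics concrete, here is a sketch that derives a counter shaped like `gemini_cli.tool.call.count` from `gemini_cli.tool_call`-shaped event records. The sample records are hypothetical, and the CLI computes its metrics internally; this only illustrates how the documented attributes combine.

```python
from collections import Counter

def tool_call_counts(events):
    """Aggregate tool_call events into counts keyed by (function_name, success),
    mirroring two of the attributes on gemini_cli.tool.call.count."""
    return Counter((e["function_name"], e["success"]) for e in events)

# Hypothetical records using the gemini_cli.tool_call attributes listed above.
sample = [
    {"function_name": "read_file", "success": True, "duration_ms": 12},
    {"function_name": "read_file", "success": True, "duration_ms": 9},
    {"function_name": "run_shell_command", "success": False, "duration_ms": 40},
]
counts = tool_call_counts(sample)
print(counts[("read_file", True)])  # 2
```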
- `gemini_cli.session.count` (Counter, Int): Incremented once per CLI startup.
- `gemini_cli.tool.call.count` (Counter, Int): Counts tool calls.
  - `function_name`
  - `success` (boolean)
  - `decision` (string: "accept", "reject", or "modify", if applicable)
  - `tool_type` (string: "mcp" or "native", if applicable)
- `gemini_cli.tool.call.latency` (Histogram, ms): Measures tool call latency.
  - `function_name`
  - `decision` (string: "accept", "reject", or "modify", if applicable)
- `gemini_cli.api.request.count` (Counter, Int): Counts all API requests.
  - `model`
  - `status_code`
  - `error_type` (if applicable)
- `gemini_cli.api.request.latency` (Histogram, ms): Measures API request latency. See also `gen_ai.client.operation.duration` below, which is compliant with GenAI Semantic Conventions.
  - `model`
- `gemini_cli.token.usage` (Counter, Int): Counts the number of tokens used. See also `gen_ai.client.token.usage` below for input/output token types, which is compliant with GenAI Semantic Conventions.
  - `model`
  - `type` (string: "input", "output", "thought", "cache", or "tool")
- `gemini_cli.file.operation.count` (Counter, Int): Counts file operations.
  - `operation` (string: "create", "read", "update"): The type of file operation.
  - `lines` (Int, if applicable): Number of lines in the file.
  - `mimetype` (string, if applicable): Mimetype of the file.
  - `extension` (string, if applicable): File extension of the file.
  - `model_added_lines` (Int, if applicable): Number of lines added/changed by the model.
  - `model_removed_lines` (Int, if applicable): Number of lines removed/changed by the model.
  - `user_added_lines` (Int, if applicable): Number of lines added/changed by the user in AI-proposed changes.
  - `user_removed_lines` (Int, if applicable): Number of lines removed/changed by the user in AI-proposed changes.
  - `programming_language` (string, if applicable): The programming language of the file.
- `gemini_cli.chat_compression` (Counter, Int): Counts chat compression operations.
  - `tokens_before` (Int): Number of tokens in context prior to compression.
  - `tokens_after` (Int): Number of tokens in context after compression.

The following metrics comply with OpenTelemetry GenAI semantic conventions for standardized observability across GenAI applications:
- `gen_ai.client.token.usage` (Histogram, token): Number of input and output tokens used per operation.
  - `gen_ai.operation.name` (string): The operation type (e.g., "generate_content", "chat")
  - `gen_ai.provider.name` (string): The GenAI provider ("gcp.gen_ai" or "gcp.vertex_ai")
  - `gen_ai.token.type` (string): The token type ("input" or "output")
  - `gen_ai.request.model` (string, optional): The model name used for the request
  - `gen_ai.response.model` (string, optional): The model name that generated the response
  - `server.address` (string, optional): GenAI server address
  - `server.port` (int, optional): GenAI server port
- `gen_ai.client.operation.duration` (Histogram, s): GenAI operation duration in seconds.
  - `gen_ai.operation.name` (string): The operation type (e.g., "generate_content", "chat")
  - `gen_ai.provider.name` (string): The GenAI provider ("gcp.gen_ai" or "gcp.vertex_ai")
  - `gen_ai.request.model` (string, optional): The model name used for the request
  - `gen_ai.response.model` (string, optional): The model name that generated the response
  - `server.address` (string, optional): GenAI server address
  - `server.port` (int, optional): GenAI server port
  - `error.type` (string, optional): Error type if the operation failed
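As a closing illustration, an attribute-keyed metric like `gen_ai.client.token.usage` can be modeled with a plain dictionary; a real exporter would use an OpenTelemetry histogram instrument instead. The helper below is hypothetical, and only the attribute values follow the conventions listed above:

```python
def record_token_usage(store, operation, provider, token_type, count):
    """Accumulate token counts keyed by a subset of the gen_ai.* attributes:
    (gen_ai.operation.name, gen_ai.provider.name, gen_ai.token.type)."""
    key = (operation, provider, token_type)
    store[key] = store.get(key, 0) + count
    return store

usage = {}
record_token_usage(usage, "generate_content", "gcp.gen_ai", "input", 120)
record_token_usage(usage, "generate_content", "gcp.gen_ai", "output", 45)
record_token_usage(usage, "generate_content", "gcp.gen_ai", "input", 30)
print(usage[("generate_content", "gcp.gen_ai", "input")])  # 150
```

Keying on the full attribute tuple is what lets a backend later slice token usage by operation, provider, or token type, which is the point of attaching these attributes to the metric.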