Headless mode allows you to run Gemini CLI programmatically from command-line scripts and automation tools without any interactive UI. It is ideal for scripting, CI/CD pipelines, and building AI-powered tools.
Headless mode provides a non-interactive interface to Gemini CLI: it takes input from command-line arguments or standard input, writes results to standard output, and can emit structured JSON for downstream tooling.
Use the `--prompt` (or `-p`) flag to run in headless mode:

```bash
gemini --prompt "What is machine learning?"
```

Pipe input to Gemini CLI from your terminal:

```bash
echo "Explain this code" | gemini
```

Read from files and process with Gemini:

```bash
cat README.md | gemini --prompt "Summarize this documentation"
```

Standard human-readable output:
```bash
gemini -p "What is the capital of France?"
```

Response:

```
The capital of France is Paris.
```

The JSON format returns structured data including the response, statistics, and metadata. This format is ideal for programmatic processing and automation scripts.
The JSON output follows this high-level structure:
```json
{
  "response": "string",  // The main AI-generated content answering your prompt
  "stats": {             // Usage metrics and performance data
    "models": {          // Per-model API and token usage statistics
      "[model-name]": {
        "api": { /* request counts, errors, latency */ },
        "tokens": { /* prompt, response, cached, total counts */ }
      }
    },
    "tools": {           // Tool execution statistics
      "totalCalls": "number",
      "totalSuccess": "number",
      "totalFail": "number",
      "totalDurationMs": "number",
      "totalDecisions": { /* accept, reject, modify, auto_accept counts */ },
      "byName": { /* per-tool detailed stats */ }
    },
    "files": {           // File modification statistics
      "totalLinesAdded": "number",
      "totalLinesRemoved": "number"
    }
  },
  "error": {             // Present only when an error occurred
    "type": "string",    // Error type (e.g., "ApiError", "AuthError")
    "message": "string", // Human-readable error description
    "code": "number"     // Optional error code
  }
}
```

```bash
gemini -p "What is the capital of France?" --output-format json
```

Response:
```json
{
  "response": "The capital of France is Paris.",
  "stats": {
    "models": {
      "gemini-2.5-pro": {
        "api": { "totalRequests": 2, "totalErrors": 0, "totalLatencyMs": 5053 },
        "tokens": { "prompt": 24939, "candidates": 20, "total": 25113, "cached": 21263, "thoughts": 154, "tool": 0 }
      },
      "gemini-2.5-flash": {
        "api": { "totalRequests": 1, "totalErrors": 0, "totalLatencyMs": 1879 },
        "tokens": { "prompt": 8965, "candidates": 10, "total": 9033, "cached": 0, "thoughts": 30, "tool": 28 }
      }
    },
    "tools": {
      "totalCalls": 1,
      "totalSuccess": 1,
      "totalFail": 0,
      "totalDurationMs": 1881,
      "totalDecisions": { "accept": 0, "reject": 0, "modify": 0, "auto_accept": 1 },
      "byName": {
        "google_web_search": {
          "count": 1,
          "success": 1,
          "fail": 0,
          "durationMs": 1881,
          "decisions": { "accept": 0, "reject": 0, "modify": 0, "auto_accept": 1 }
        }
      }
    },
    "files": { "totalLinesAdded": 0, "totalLinesRemoved": 0 }
  }
}
```

Save output to files or pipe to other commands:
```bash
# Save to file
gemini -p "Explain Docker" > docker-explanation.txt
gemini -p "Explain Docker" --output-format json > docker-explanation.json

# Append to file
gemini -p "Add more details" >> docker-explanation.txt

# Pipe to other tools
gemini -p "What is Kubernetes?" --output-format json | jq '.response'
gemini -p "Explain microservices" | wc -w
gemini -p "List programming languages" | grep -i "python"
```

Key command-line options for headless usage:
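When consuming JSON output in scripts, it helps to check for the `error` object before trusting `.response`. A minimal sketch, assuming `jq` is available; the `gemini_response` helper name is ours, not part of the CLI:

```shell
# gemini_response: run a prompt in JSON mode, fail if the payload carries
# an "error" object (per the structure documented above), otherwise print
# only the .response field. The helper name is ours; the flags and field
# names come from the docs.
gemini_response() {
  local result
  result=$(gemini -p "$1" --output-format json) || return 1
  if [ "$(printf '%s\n' "$result" | jq 'has("error")')" = "true" ]; then
    printf '%s\n' "$result" | jq -r '.error.message' >&2
    return 1
  fi
  printf '%s\n' "$result" | jq -r '.response'
}
```

Used in a pipeline: `answer=$(gemini_response "What is Kubernetes?") || echo "request failed" >&2`.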
| Option | Description | Example |
|---|---|---|
| `--prompt`, `-p` | Run in headless mode | `gemini -p "query"` |
| `--output-format` | Specify output format (`text`, `json`) | `gemini -p "query" --output-format json` |
| `--model`, `-m` | Specify the Gemini model | `gemini -p "query" -m gemini-2.5-flash` |
| `--debug`, `-d` | Enable debug mode | `gemini -p "query" --debug` |
| `--all-files`, `-a` | Include all files in context | `gemini -p "query" --all-files` |
| `--include-directories` | Include additional directories | `gemini -p "query" --include-directories src,docs` |
| `--yolo`, `-y` | Auto-approve all actions | `gemini -p "query" --yolo` |
| `--approval-mode` | Set approval mode | `gemini -p "query" --approval-mode auto_edit` |
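These options compose, so for unattended runs you may want to bundle the ones you always use into a small wrapper. A sketch; the `gemini_auto` name and this particular flag combination are our own choices, not a CLI feature:

```shell
# gemini_auto: a hypothetical wrapper bundling flags for unattended runs:
# auto-approve edits, pull in extra directory context, and emit JSON so
# the result can be parsed downstream.
gemini_auto() {
  gemini -p "$1" \
    --approval-mode auto_edit \
    --include-directories src \
    --output-format json
}
```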
For complete details on all available configuration options, settings files, and environment variables, see the Configuration Guide.
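In CI/CD pipelines, transient network or quota failures are common. A generic retry helper keeps a job from failing on the first hiccup; this is plain shell, nothing gemini-specific, and it assumes the wrapped command exits non-zero on failure (the usual CLI convention):

```shell
# retry: run a command up to N times, pausing briefly between attempts.
# Returns the command's success status, or 1 after the final failure.
retry() {
  local attempts=$1 i=1
  shift
  until "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1
    fi
    i=$((i + 1))
    sleep 1
  done
}

# Example: retry 3 gemini -p "Summarize this repo" --output-format json > summary.json
```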
Review code for security issues:

```bash
cat src/auth.py | gemini -p "Review this authentication code for security issues" > security-review.txt
```

Generate a commit message from staged changes:

```bash
result=$(git diff --cached | gemini -p "Write a concise commit message for these changes" --output-format json)
echo "$result" | jq -r '.response'
```

Generate an API spec from route definitions:

```bash
result=$(cat api/routes.js | gemini -p "Generate OpenAPI spec for these routes" --output-format json)
echo "$result" | jq -r '.response' > openapi.json
```

Batch-process a set of source files:

```bash
for file in src/*.py; do
  echo "Analyzing $file..."
  result=$(cat "$file" | gemini -p "Find potential bugs and suggest improvements" --output-format json)
  echo "$result" | jq -r '.response' > "reports/$(basename "$file").analysis"
  echo "Completed analysis for $(basename "$file")" >> reports/progress.log
done
```

Review branch changes before a pull request:

```bash
result=$(git diff origin/main...HEAD | gemini -p "Review these changes for bugs, security issues, and code quality" --output-format json)
echo "$result" | jq -r '.response' > pr-review.json
```

Analyze recent log errors:

```bash
grep "ERROR" /var/log/app.log | tail -20 | gemini -p "Analyze these errors and suggest root cause and fixes" > error-analysis.txt
```

Generate release notes from commit history:

```bash
result=$(git log --oneline v1.0.0..HEAD | gemini -p "Generate release notes from these commits" --output-format json)
response=$(echo "$result" | jq -r '.response')
echo "$response"
echo "$response" >> CHANGELOG.md
```

Track token and tool usage from the JSON stats:

```bash
result=$(gemini -p "Explain this database schema" --include-directories db --output-format json)
total_tokens=$(echo "$result" | jq -r '.stats.models // {} | to_entries | map(.value.tokens.total) | add // 0')
models_used=$(echo "$result" | jq -r '.stats.models // {} | keys | join(", ") | if . == "" then "none" else . end')
tool_calls=$(echo "$result" | jq -r '.stats.tools.totalCalls // 0')
tools_used=$(echo "$result" | jq -r '.stats.tools.byName // {} | keys | join(", ") | if . == "" then "none" else . end')
echo "$(date): $total_tokens tokens, $tool_calls tool calls ($tools_used) used with models: $models_used" >> usage.log
echo "$result" | jq -r '.response' > schema-docs.md
echo "Recent usage trends:"
tail -5 usage.log
```
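The usage log written above is plain text, so ordinary tools can aggregate it. A sketch (the `total_tokens_logged` name is ours) that sums the token counts, assuming the exact line format produced by the tracking snippet:

```shell
# total_tokens_logged: sum the "<N> tokens," counts from a usage log in
# the format written above ("<date>: <N> tokens, <M> tool calls ...").
# The helper name is ours; adjust the matching if you change the format.
total_tokens_logged() {
  awk '{ for (i = 2; i <= NF; i++) if ($i == "tokens,") sum += $(i - 1) } END { print sum + 0 }' "${1:-usage.log}"
}
```

For example, `echo "Total tokens so far: $(total_tokens_logged usage.log)"` at the end of a pipeline run.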