An intelligent MCP server that acts as an AI tech lead for coding agents—providing expert validation, impact analysis, and strategic guidance before code changes are made. Like a senior engineer reviewing your approach, Athena Protocol helps AI agents catch critical issues early, validate assumptions against the actual codebase, and optimize their problem-solving strategies. The result: higher quality code, fewer regressions, and more thoughtful architectural decisions.
Key Feature: Precision file analysis with analysisTargets - achieve 70-85% token reduction and 3-4× faster performance with precision-targeted code analysis. See Enhanced File Analysis for details
Imagine LLMs working with context so refined and targeted that they eliminate guesswork, reduce errors by 80%, and deliver code with the precision of seasoned architects—transforming how AI agents understand and enhance complex codebases.
This server handles API keys for multiple LLM providers. Ensure your .env file is properly secured and never committed to version control. The server validates all API keys on startup and provides detailed error messages for configuration issues.
The Athena Protocol MCP Server provides systematic thinking validation for AI coding agents. It supports 14 LLM providers and offers various validation tools including thinking validation, impact analysis, assumption checking, dependency mapping, and thinking optimization.
Key features:
- Smart Client Mode with precision-targeted code analysis (70-85% token reduction)
- Environment-driven configuration with no hardcoded defaults
- Multi-provider LLM support (14 providers) with automatic fallback
- Enhanced file reading with multiple modes (full, head, tail, range)
- Concurrent file operations for 3-4× performance improvement
- Session-based validation history and memory management
- Comprehensive configuration validation and health monitoring
- Dual-agent architecture for efficient validation workflows
This module assumes a working knowledge of Node.js and npm.
```bash
npm install
npm run build
```

Prerequisites:

- Node.js >= 18
- npm or yarn
The Athena Protocol uses 100% environment-driven configuration - no hardcoded provider values or defaults. Configure everything through your .env file:
1. Copy the example configuration:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` and configure your provider:

   - Set `DEFAULT_LLM_PROVIDER` (e.g., `openai`, `anthropic`, `google`)
   - Add your API key for the chosen provider
   - Configure model and parameters (optional)
   - Set `PROVIDER_SELECTION_PRIORITY` to list your providers in priority order

3. Validate and test:

   ```bash
   npm install
   npm run build
   npm run validate-config  # Validates your .env configuration
   npm test
   ```
See .env.example for complete configuration options and all 14 supported providers.
- `PROVIDER_SELECTION_PRIORITY` is REQUIRED - list your providers in priority order
- No hardcoded fallbacks exist - all configuration must be explicit in `.env`
- Fail-fast validation - invalid configuration causes immediate startup failure
- Complete provider config required - API key, model, and parameters for each provider
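As a reference point, here is a minimal `.env` sketch using the variable names shown elsewhere in this README; values are placeholders, and the exact list format for `PROVIDER_SELECTION_PRIORITY` should be checked against `.env.example`:

```bash
# Required: explicit provider priority and default provider
DEFAULT_LLM_PROVIDER=openai
PROVIDER_SELECTION_PRIORITY=openai,anthropic  # exact format: see .env.example

# Complete config for each listed provider
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL_DEFAULT=gpt-4o
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Generic LLM parameters
LLM_TEMPERATURE_DEFAULT=0.7
LLM_MAX_TOKENS_DEFAULT=2000
LLM_TIMEOUT_DEFAULT=30000
```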
The Athena Protocol supports 14 LLM providers. While OpenAI is commonly used, you can configure any of:
Major Cloud Providers:
- OpenAI - GPT-5 (with thinking), GPT-4o, GPT-4-turbo
- Anthropic - Claude Opus 4.1, Claude Sonnet 4.5, Claude Haiku 4.5
- Google - Gemini 2.5 (Flash/Pro/Ultra)
- Azure OpenAI - Enterprise-grade GPT models
- AWS Bedrock - Claude, Llama, and more
- Google Vertex AI - Gemini with enterprise features
Specialized Providers:
- OpenRouter - Access to 400+ models
- Groq - Ultra-fast inference
- Mistral AI - Open-source models
- Perplexity - Search-augmented models
- XAI - Grok models
- Qwen - Alibaba's high-performance LLMs
- ZAI - GLM models
Local/Self-Hosted:
- Ollama - Run models locally
Quick switch example:
```bash
# Edit .env file
ANTHROPIC_API_KEY=sk-ant-your-key-here
DEFAULT_LLM_PROVIDER=anthropic

# Restart server
npm run build && npm start
```

See the detailed provider guide for complete setup instructions.
For detailed, tested MCP client configurations, see CLIENT_MCP_CONFIGURATION_EXAMPLES.md
Local installation with .env file remains fully functional and unchanged. Simply clone the repository and run:
```bash
npm install
npm run build
```

Then configure your MCP client to point to the local installation:
{ "mcpServers": { "athena-protocol": { "command": "node", "args": ["/absolute/path/to/athena-protocol/dist/index.js"], "type": "stdio", "timeout": 300 } } }For npm/npx usage, configure your MCP client with environment variables. Only the configurations in CLIENT_MCP_CONFIGURATION_EXAMPLES.md are tested and guaranteed to work.
Example for GPT-5:
{ "mcpServers": { "athena-protocol": { "command": "npx", "args": ["@n0zer0d4y/athena-protocol"], "env": { "DEFAULT_LLM_PROVIDER": "openai", "OPENAI_API_KEY": "your-openai-api-key-here", "OPENAI_MODEL_DEFAULT": "gpt-5", "OPENAI_MAX_COMPLETION_TOKENS_DEFAULT": "8192", "OPENAI_VERBOSITY_DEFAULT": "medium", "OPENAI_REASONING_EFFORT_DEFAULT": "high", "LLM_TEMPERATURE_DEFAULT": "0.7", "LLM_MAX_TOKENS_DEFAULT": "2000", "LLM_TIMEOUT_DEFAULT": "30000" }, "type": "stdio", "timeout": 300 } } }See CLIENT_MCP_CONFIGURATION_EXAMPLES.md for complete working configurations.
Configuration Notes:
- NPM Installation: Use `npx @n0zer0d4y/athena-protocol` with the `env` field for the easiest setup
- Local Installation: Local `.env` file execution remains fully functional and unchanged
- Environment Priority: MCP `env` variables take precedence over `.env` file variables
- GPT-5 Support: Includes specific parameters for GPT-5 models
- Timeout Configuration: The default timeout of 300 seconds (5 minutes) accommodates reasoning models like GPT-5. For faster LLMs (GPT-4, Claude, Gemini), you can reduce this to 60-120 seconds
- GPT-5 Parameter Notes: The parameters `LLM_TEMPERATURE_DEFAULT`, `LLM_MAX_TOKENS_DEFAULT`, and `LLM_TIMEOUT_DEFAULT` are currently required for GPT-5 models but are not used by the model itself. This is a temporary limitation that will be addressed in a future refactoring
- Security: Never commit API keys to version control - use MCP client environment variables instead
Current Issue: GPT-5 models require the standard LLM parameters (`LLM_TEMPERATURE_DEFAULT`, `LLM_MAX_TOKENS_DEFAULT`, `LLM_TIMEOUT_DEFAULT`) even though these parameters are not used by the model.
Planned Solution:
- Modify the `getTemperature()` function to return `undefined` for GPT-5+ models instead of a hardcoded default
- Update AI provider interfaces to handle `undefined` temperature values
- Implement conditional parameter validation that skips standard parameters for GPT-5+ models
- Update the OpenAI provider to omit unused parameters when communicating with the GPT-5 API
Benefits:
- Cleaner configuration for GPT-5 users
- More accurate representation of model capabilities
- Better adherence to OpenAI's GPT-5 API specification
Timeline: Target implementation in v0.3.0
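A minimal TypeScript sketch of the planned behavior; this is hypothetical, and the real `getTemperature()` signature and provider internals may differ:

```typescript
// Sketch only, not the actual Athena Protocol source: assumes the provider
// layer resolves temperature from the configured model name.
function getTemperature(model: string): number | undefined {
  // Planned behavior: GPT-5+ reasoning models ignore temperature,
  // so return undefined instead of a hardcoded default.
  if (model.startsWith('gpt-5')) {
    return undefined;
  }
  const configured = process.env.LLM_TEMPERATURE_DEFAULT;
  return configured !== undefined ? Number(configured) : 0.7;
}

// Callers can then spread the parameter conditionally, so `temperature`
// is omitted entirely from the GPT-5 request payload.
const temperature = getTemperature('gpt-5');
const requestParams: Record<string, unknown> = {
  model: 'gpt-5',
  ...(temperature !== undefined ? { temperature } : {}),
};
```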
```bash
npm start                        # Start MCP server for client integration (requires .env or MCP env)
npm run dev                      # Development mode with auto-restart
npx @n0zer0d4y/athena-protocol   # Run published version via npx (requires MCP env)

npm run start:standalone         # Test server without MCP client
npm run dev:standalone           # Development standalone mode

# Validate your complete configuration
npm run validate-config

# Or use the comprehensive MCP validation tool
node dist/index.js
# Then call: validate_configuration_comprehensive
```

Athena Protocol supports 14 providers including:
- Cloud Providers: OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Vertex AI
- Specialized: OpenRouter (400+ models), Groq, Mistral, Perplexity, XAI, Qwen, ZAI
- Local/Self-Hosted: Ollama
All providers require API keys (except Ollama for local models). See configuration section for setup.
- Focused Validation: Validates essential aspects of reasoning with streamlined communication
- Dual-Agent Architecture: Primary agent and validation agent work in partnership
- Confidence Scoring: Explicit confidence levels to guide decision-making
- Loop Prevention: Maximum 3 exchanges per task to prevent analysis paralysis
- Essential Information Only: Share what's necessary for effective validation
- Actionable Outputs: Clear, specific recommendations that can be immediately applied
- Progressive Refinement: Start broad, get specific only when needed
- Session Management: Maintains persistent validation sessions across multiple attempts
- MCP Server Mode: Full integration with MCP clients (Claude Desktop, Cline, etc.)
- Standalone Mode: Independent testing and verification without MCP client
The Athena Protocol MCP Server provides the following tools for thinking validation and analysis:
Validate the primary agent's thinking process with focused, essential information.
Required Parameters:
- `thinking` (string): Brief explanation of the approach and reasoning
- `proposedChange` (object): Details of the proposed change
  - `description` (string, required): What will be changed
  - `code` (string, optional): The actual code change
  - `files` (array, optional): Files that will be affected
- `context` (object): Context for the validation
  - `problem` (string, required): Brief problem description
  - `techStack` (string, required): Technology stack (react|node|python etc.)
  - `constraints` (array, optional): Key constraints
- `urgency` (string): Urgency level (`low`, `medium`, or `high`)
- `projectContext` (object): Project context for file analysis
  - `projectRoot` (string, required): Absolute path to project root
  - `workingDirectory` (string, optional): Current working directory
  - `analysisTargets` (array, REQUIRED): Specific code sections with targeted reading
    - `file` (string, required): File path (relative or absolute)
    - `mode` (string, optional): Read mode - `full`, `head`, `tail`, or `range`
    - `lines` (number, optional): Number of lines (for head/tail modes)
    - `startLine` (number, optional): Start line number (for range mode, 1-indexed)
    - `endLine` (number, optional): End line number (for range mode, 1-indexed)
    - `priority` (string, optional): Analysis priority - `critical`, `important`, or `supplementary`
- `projectBackground` (string): Brief project description to prevent hallucination
Optional Parameters:
- `sessionId` (string): Session ID for context persistence
- `provider` (string): LLM provider override (`openai`, `anthropic`, `google`, etc.)
Output:
Returns validation results with confidence score, critical issues, recommendations, and test cases.
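An illustrative request body (values are hypothetical; the line range mirrors the Smart Client Mode example later in this document):

```json
{
  "thinking": "Replace the callback-based token refresh with async/await to fix a race condition",
  "proposedChange": {
    "description": "Refactor the token refresh logic in src/auth.ts to async/await",
    "files": ["src/auth.ts"]
  },
  "context": {
    "problem": "Intermittent 401s caused by overlapping token refreshes",
    "techStack": "node"
  },
  "urgency": "medium",
  "projectContext": {
    "projectRoot": "/path/to/project",
    "analysisTargets": [
      { "file": "src/auth.ts", "mode": "range", "startLine": 45, "endLine": 78, "priority": "critical" }
    ]
  },
  "projectBackground": "Node.js API service with JWT-based authentication"
}
```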
Quickly identify key impacts of proposed changes.
Required Parameters:
- `change` (object): Details of the change
  - `description` (string, required): What is being changed
  - `code` (string, optional): The code change
  - `files` (array, optional): Affected files
- `projectContext` (object): Project context (same structure as thinking_validation)
  - `projectRoot` (string, required)
  - `analysisTargets` (array, REQUIRED): Files to analyze with read modes
  - `workingDirectory` (optional)
- `projectBackground` (string): Brief project description
Optional Parameters:
- `systemContext` (object): System architecture context
  - `architecture` (string): Brief architecture description
  - `keyDependencies` (array): Key system dependencies
- `sessionId` (string): Session ID for context persistence
- `provider` (string): LLM provider override
Output:
Returns overall risk assessment, affected areas, cascading risks, and quick tests to run.
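An illustrative request (all values hypothetical):

```json
{
  "change": {
    "description": "Rename the User.email column to User.primaryEmail",
    "files": ["src/models/user.ts"]
  },
  "systemContext": {
    "architecture": "REST API backed by PostgreSQL",
    "keyDependencies": ["ORM models", "email notification service"]
  },
  "projectContext": {
    "projectRoot": "/path/to/project",
    "analysisTargets": [
      { "file": "src/models/user.ts", "mode": "full", "priority": "critical" }
    ]
  },
  "projectBackground": "TypeScript backend service"
}
```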
Rapidly validate key assumptions without over-analysis.
Required Parameters:
- `assumptions` (array): List of assumption strings to validate
- `context` (object): Validation context
  - `component` (string, required): Component name
  - `environment` (string, required): Environment (`production`, `development`, `staging`, `testing`)
- `projectContext` (object): Project context (same structure as thinking_validation)
  - `projectRoot` (string, required)
  - `analysisTargets` (array, REQUIRED): Files to analyze with read modes
- `projectBackground` (string): Brief project description
Optional Parameters:
- `sessionId` (string): Session ID for context persistence
- `provider` (string): LLM provider override
Output:
Returns valid assumptions, risky assumptions with mitigations, and quick verification steps.
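For example, a hedged request with hypothetical values:

```json
{
  "assumptions": [
    "The session store is shared across all server instances",
    "Config values are loaded before the first request is served"
  ],
  "context": { "component": "session-manager", "environment": "production" },
  "projectContext": {
    "projectRoot": "/path/to/project",
    "analysisTargets": [
      { "file": "src/config.ts", "mode": "head", "lines": 20, "priority": "important" }
    ]
  },
  "projectBackground": "Clustered Node.js web service"
}
```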
Identify critical dependencies efficiently.
Required Parameters:
- `change` (object): Details of the change
  - `description` (string, required): Brief change description
  - `files` (array, optional): Files being modified
  - `components` (array, optional): Components being changed
- `projectContext` (object): Project context (same structure as thinking_validation)
  - `projectRoot` (string, required)
  - `analysisTargets` (array, REQUIRED): Files to analyze with read modes
- `projectBackground` (string): Brief project description
Optional Parameters:
- `sessionId` (string): Session ID for context persistence
- `provider` (string): LLM provider override
Output:
Returns critical and secondary dependencies, with impact analysis and test focus areas.
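A compact illustrative request (values hypothetical); note the `components` field, which distinguishes this tool's input from impact analysis:

```json
{
  "change": {
    "description": "Extract shared validation helpers into src/utils/validate.ts",
    "components": ["form-validation", "api-input-sanitizer"]
  },
  "projectContext": {
    "projectRoot": "/path/to/project",
    "analysisTargets": [
      { "file": "src/utils/validate.ts", "mode": "full", "priority": "critical" }
    ]
  },
  "projectBackground": "TypeScript monorepo with shared utility packages"
}
```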
Optimize thinking approach based on problem type.
Required Parameters:
- `problemType` (string): Type of problem (`bug_fix`, `feature_impl`, or `refactor`)
- `complexity` (string): Complexity level (`simple`, `moderate`, or `complex`)
- `timeConstraint` (string): Time constraint (`tight`, `moderate`, or `flexible`)
- `currentApproach` (string): Brief description of current thinking
- `projectContext` (object): Project context (same structure as thinking_validation)
  - `projectRoot` (string, required)
  - `analysisTargets` (array, REQUIRED): Files to analyze with read modes
- `projectBackground` (string): Brief project description
Optional Parameters:
- `sessionId` (string): Session ID for context persistence
- `provider` (string): LLM provider override
Output:
Returns a comprehensive optimization strategy including:
- `optimizedStrategy`: Recommended approach, tools to use, time allocation breakdown, success probability, and key focus areas
- `tacticalPlan`: Detailed implementation guidance with problem classification, grep search strategies, key findings hypotheses, decision points, step-by-step implementation plan, testing strategy, risk mitigation, progress checkpoints, and value/effort assessment
- `metadata`: Provider used and file analysis metrics
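An illustrative request (values hypothetical):

```json
{
  "problemType": "bug_fix",
  "complexity": "moderate",
  "timeConstraint": "tight",
  "currentApproach": "Bisect recent commits to locate the regression, then patch and add a test",
  "projectContext": {
    "projectRoot": "/path/to/project",
    "analysisTargets": [
      { "file": "src/auth.ts", "mode": "tail", "lines": 50, "priority": "critical" }
    ]
  },
  "projectBackground": "Node.js API service"
}
```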
Check the health status and configuration of the Athena Protocol server.
Parameters: None
Output:
Returns default provider, list of active providers with valid API keys, configuration status, and system health information.
Manage thinking validation sessions for context persistence and progress tracking.
Required Parameters:
- `action` (string): Session action - `create`, `get`, `update`, `list`, or `delete`
Optional Parameters:
- `sessionId` (string): Session ID (required for `get`, `update`, and `delete` actions)
- `tags` (array): Tags to categorize the session
- `title` (string): Session title/description (for `create`/`update`)
Output:
Returns session information or list of sessions depending on the action.
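For example, an illustrative `create` call (field values are hypothetical):

```json
{
  "action": "create",
  "title": "Auth refactor validation",
  "tags": ["auth", "refactor"]
}
```

A later `get`, `update`, or `delete` call would pass the `sessionId` returned from `create`.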
All tools now support Smart Client Mode with analysisTargets for precision targeting:
Benefits:
- 70-85% token reduction by reading only relevant code sections
- 3-4× faster with concurrent file reading
- Mode-based reading: full, head (first N lines), tail (last N lines), range (lines X-Y)
- Priority processing: critical → important → supplementary
Example:
{ "projectContext": { "projectRoot": "/path/to/project", "analysisTargets": [ { "file": "src/auth.ts", "mode": "range", "startLine": 45, "endLine": 78, "priority": "critical" }, { "file": "src/config.ts", "mode": "head", "lines": 20, "priority": "supplementary" } ] } }Note: All tools require analysisTargets for file analysis. Provide at least one file with appropriate read mode (full, head, tail, or range).
The persistent memory system (thinking-memory.json) is currently under review and pending refactoring. While functional, it:
- Creates a memory file in the project root directory
- Persists validation history across sessions
- May require manual cleanup during testing/development
Planned improvements:
- Move storage to a `.gitignore`'d directory (e.g. `athena-memory/`)
- Add automatic cleanup mechanisms
- Enhanced session management
- Improved file path handling
For production use, consider this feature as experimental until the refactor is complete.
Athena Protocol supports three configuration sources with clear priority ordering:
- MCP Client Environment Variables (highest priority - recommended for npm installations)
- Local .env File (fallback - for local development)
- System Environment Variables (lowest priority)
For npm-published usage, configure all settings directly in your MCP client's env field. For local development, continue using .env files.
While Athena Protocol supports 14 LLM providers, only the following have been thoroughly tested:
- OpenAI
- ZAI
- Mistral
- OpenRouter
- Groq
Other providers (Anthropic, Qwen, XAI, Perplexity, Ollama, Azure, Bedrock, Vertex) are configured and should work, but have not been extensively tested. If you encounter issues with any provider, please open an issue with:
- Provider name and model
- Error messages or unexpected behavior
- Your MCP configuration or `.env` configuration (redact API keys)
This server is designed specifically for LLM coding agents. Contributions should focus on:
- Adding new LLM providers
- Improving validation effectiveness
- Enhancing context awareness
- Expanding validation coverage
- Optimizing memory management
- Adding new validation strategies
MIT License - see LICENSE file for details.
