Introduction
The future of enterprise AI isn't just about more powerful models—it's about intelligent agents that communicate and collaborate. Today, I'm sharing an implementation that demonstrates agent-to-agent AI integration for AWS security analysis, combining AI21 Maestro's requirement-driven validation with Strands Agents through the Model Context Protocol (MCP).
The Problem: Fragmented Security Analysis
Traditional AWS security analysis is time-consuming and inconsistent:
- Manual Tool Selection: Analysts must know which tools to use for different scenarios
- Inconsistent Outputs: Different AI models produce varying report formats
- Siloed Analysis: Security Hub and CloudTrail data analyzed separately
- Hours of Manual Work: Correlating findings and generating professional reports
The Solution: Multi-Agent Architecture
I've built a system where a Strands Agent intelligently calls AI21 Maestro's agent orchestration through MCP:
User Query → Strands Agent (Nova Premier) → MCP Tool → AI21 Maestro (Jamba Mini) → Validated Report → User
Architecture Diagram
Key Components:
- Strands Agent: Powered by Amazon Bedrock Nova Premier for reasoning and tool selection
- AI21 Maestro Tools: Specialized Security Hub and CloudTrail analysis functions
- MCP Protocol: Enables seamless communication between different AI systems
Intelligent Tool Selection: Natural language queries automatically trigger the right analysis (see the wiring sketch after this list):
- "Check my security findings" → Security Hub analysis
- "Analyze suspicious activity" → CloudTrail monitoring
AI21 Maestro's Requirements System
What makes this effective is AI21 Maestro's requirement-based validation. Instead of hoping AI follows instructions, I define explicit constraints:
```python
security_hub_requirements = [
    {
        "name": "markdown_format",
        "description": "Use proper markdown formatting with headers and code blocks"
    },
    {
        "name": "prioritize_critical",
        "description": "Emphasize CRITICAL and HIGH severity findings requiring immediate attention"
    },
    {
        "name": "actionable_recommendations",
        "description": "Provide specific remediation steps, not generic advice"
    }
]
```
The Generate → Validate → Fix Cycle
AI21 Maestro employs a systematic validation process:
- Generate: Creates initial response following requirements
- Validate: Scores each requirement from 0.0 to 1.0
- Fix: Refines output for requirements scoring < 1.0
- Repeat: Continues until all requirements are met
This ensures consistent, professional security reports without hallucinations or formatting issues.
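In code terms, the cycle can be pictured roughly like this (a conceptual sketch of the loop described above, not AI21's implementation; `generate`, `validate_requirement`, and `fix` are hypothetical helpers):

```python
# Conceptual sketch of the Generate → Validate → Fix cycle. The helpers
# generate(), validate_requirement(), and fix() are hypothetical stand-ins
# for work that Maestro performs internally.
def generate_validated_report(prompt, requirements, max_rounds=3):
    draft = generate(prompt)  # Generate: initial response following the requirements
    for _ in range(max_rounds):
        # Validate: score each requirement from 0.0 to 1.0
        scores = {r["name"]: validate_requirement(draft, r) for r in requirements}
        failing = [r for r in requirements if scores[r["name"]] < 1.0]
        if not failing:
            return draft  # all requirements met
        # Fix: refine the output against the requirements that fell short
        draft = fix(draft, failing)
    return draft
```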
Real-World Impact
Before: Traditional Approach
- Manually access AWS Security Hub console
- Export findings to spreadsheet
- Access CloudTrail console separately
- Spend hours creating formatted reports
- Risk inconsistent analysis
After: Agent-to-Agent Integration
```python
# User simply asks in natural language
user_input = "What security issues should I prioritize?"

# System automatically:
# 1. Understands intent (Strands Agent)
# 2. Selects Security Hub analysis
# 3. Calls AI21 Maestro with requirements
# 4. Returns validated, professional report
```
Result: Thousands of findings analyzed in seconds with structured, actionable recommendations.
Technical Implementation
Security Hub Analysis Tool
```python
@tool
def analyze_aws_security_hub() -> str:
    # Retrieve active AWS Security Hub findings
    findings = get_all_findings(filters={
        'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}]
    })

    # Process and structure the data
    summary = summarize_findings(findings)

    # Call AI21 Maestro with explicit requirements
    result = call_ai21_maestro_simple(
        security_hub_prompt,
        security_hub_requirements,
        summary
    )

    return f"## AWS Security Hub Analysis Report\n\n{result}"
```
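The `get_all_findings` helper isn't shown above; a minimal sketch using boto3's Security Hub paginator might look like this (the client configuration is an assumption):

```python
import boto3

def get_all_findings(filters):
    """Fetch all Security Hub findings matching the given filters (illustrative sketch)."""
    client = boto3.client("securityhub")  # region and credentials come from the environment
    paginator = client.get_paginator("get_findings")
    findings = []
    for page in paginator.paginate(Filters=filters):
        findings.extend(page["Findings"])
    return findings
```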
The AI21 Maestro Integration
The key to agent-to-agent communication is the simplified Maestro call function that properly separates context from requirements:
```python
def call_ai21_maestro_simple(prompt, requirements, data):
    """Simple synchronous call to AI21 Maestro"""
    # Combine prompt and data as context for the task
    run_input = f"""{prompt}

{data}"""

    # Use asyncio.run for simple execution
    async def run_maestro():
        run_result = await ai21_client.beta.maestro.runs.create_and_poll(
            input=run_input,
            requirements=requirements,  # Pass requirements separately for proper validation
            models=["jamba-mini"],  # Using latest Jamba Mini model
            budget="low",
        )
        return run_result.result

    return asyncio.run(run_maestro())
```
This approach follows AI21 Maestro's best practices by:
- Separating Context from Instructions: The `input` parameter contains the task context (prompt + data)
- Proper Requirements Handling: Requirements are passed through the dedicated `requirements` parameter for optimal validation
- Latest Model Usage: Using `jamba-mini` ensures compatibility with future model updates
This function handles the complexity of async AI21 Maestro calls while providing a clean synchronous interface for the Strands Agent tools.
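For reference, a standalone call might look like this (the prompt text and `summary` variable are illustrative; `security_hub_requirements` is the list defined earlier):

```python
# Illustrative standalone call; the prompt text and summary variable are placeholders.
report = call_ai21_maestro_simple(
    prompt="Analyze these AWS Security Hub findings and produce a prioritized report.",
    requirements=security_hub_requirements,
    data=summary,  # structured findings summary
)
print(report)
```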
Example Output
```markdown
## AWS Security Hub Analysis Report

### Executive Summary
Your AWS environment shows 18 active security findings with 2 CRITICAL and 5 HIGH severity issues demanding prompt remediation.

### Severity Analysis
- **CRITICAL**: 2 findings requiring immediate action
  - EC2 instance with public access (i-0abc123def456789)
  - S3 bucket with unrestricted permissions

### Recommended Actions
1. **Today**: Restrict EC2 security group rules
2. **This Week**: Update S3 bucket policies
3. **This Month**: Implement AWS Config compliance rules
```
MCP Compatibility: Open Source Innovation
By building with MCP compatibility (see the server sketch after this list), these tools are:
- Cross-Framework Compatible: Work with any MCP-compatible agent system
- Reusable: Can be integrated into different AI workflows
- Standardized: Follow MCP protocols for consistent communication
- Community-Driven: Open source for broader ecosystem development
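As a rough illustration of that portability, the same analysis logic could be served to any MCP client with the official Python SDK's FastMCP helper (a minimal sketch; the server and tool names are arbitrary, and the helper functions are the ones shown earlier):

```python
# Minimal sketch of serving the analysis logic over MCP. Assumes the official
# MCP Python SDK (pip install "mcp"); the server name and tool name are
# illustrative, and the helper functions are the ones shown earlier in this post.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aws-security-analysis")

@mcp.tool()
def security_hub_report() -> str:
    """Return a validated AWS Security Hub analysis report."""
    findings = get_all_findings(
        filters={"RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}]}
    )
    summary = summarize_findings(findings)
    return call_ai21_maestro_simple(security_hub_prompt, security_hub_requirements, summary)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so any MCP-compatible client can attach
```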
Business Impact
For Security Teams:
- Analysis time: Hours → Seconds
- Consistent quality across all reports
- Comprehensive coverage of Security Hub + CloudTrail
- Specific remediation steps, not generic advice
For Organizations:
- Faster threat identification and response
- Reduced manual effort and costs
- Compliance-ready structured reports
- Scalable security for growing AWS environments
Getting Started
Prerequisites
- Python 3.10+
- AWS credentials configured with Security Hub and CloudTrail access
- AI21 API key
- Amazon Bedrock access (Nova Premier model)
- Appropriate IAM permissions
1. Clone and Install
```bash
# Clone the repository
git clone https://github.com/awsdataarchitect/agent2agent-strands-ai21-maestro.git
cd agent2agent-strands-ai21-maestro

# Install dependencies
pip install -r requirements.txt
```
2. Configure Environment Variables
```bash
# Set AI21 API key for agent-to-agent communication
export AI21_API_KEY=your_ai21_api_key_here

# Configure AWS credentials (if not using aws configure)
export AWS_ACCESS_KEY_ID=your_key
export AWS_SECRET_ACCESS_KEY=your_secret
export AWS_DEFAULT_REGION=us-east-1

# Or use AWS CLI configuration
aws configure
```
3. Set Up IAM Permissions
Ensure your AWS credentials have minimum required permissions:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "securityhub:GetFindings", "cloudtrail:LookupEvents", "bedrock:InvokeModel" ], "Resource": "*" } ] }
4. Run the Multi-Agent System
```bash
# Start the Strands Agent with AI21 Maestro integration
python strands_ai21_maestro_agent.py
```
5. Interact with Natural Language
"Analyze my Security Hub findings" → Triggers Security Hub analysis "Check CloudTrail for suspicious activity" → Invokes CloudTrail monitoring "What security issues should I prioritize?" → Agent selects best approach
Watch the Demo
Watch my AI21 Labs x AWS Heroes hackathon entry video for a demonstration of agent-to-agent AI integration in action.
Conclusion
This implementation demonstrates that enterprise AI's future lies in intelligent agents working together. By combining Strands Agents, AI21 Maestro Agentic Orchestration, MCP, and AWS Security Services, I've created a system that transforms security analysis from hours of manual work to seconds of AI-powered insights.
Agent-to-agent communication is becoming more practical. The code is open source and I hope this example helps others explore collaborative AI systems.
References and Resources
Documentation:
- AWS Security Hub Documentation
- AI21 Maestro Documentation
- Strands Agents Documentation
- Amazon Bedrock Documentation
- Model Context Protocol Documentation
Implementation:
Find the full open source implementation on GitHub and explore building your own agent-to-agent AI systems.