Chetan Hirapara 🇮🇳 for AWS Community Builders

Advanced Strands Agents with MCP Servers: Real-World Automation Examples

Building on my earlier blog post about Amazon Strands, this article demonstrates practical code examples that combine multiple MCP servers to solve real-world automation problems. These examples showcase the power of pairing Strands' model-first approach with specialized MCP servers for complex workflow automation.

Example 1: Intelligent Lead Generation and CRM Management System

This example demonstrates a sophisticated lead generation system that combines web research, CRM updates, and GitHub integration for sales automation.

Setup and Installation

```bash
# Install required packages
pip install strands-agents strands-agents-tools
pip install tavily-python firecrawl-py hubspot-api-client
pip install python-github-api

# Set environment variables
export TAVILY_API_KEY="your_tavily_key"
export FIRECRAWL_API_KEY="your_firecrawl_key"
export HUBSPOT_ACCESS_TOKEN="your_hubspot_token"
export GITHUB_TOKEN="your_github_token"
export OPENAI_API_KEY="your_openai_key"  # or other model provider
```

MCP Server Configurations
First, let's set up the MCP servers we'll be using:

```python
# mcp_servers_config.py
import os

from mcp.client.stdio import stdio_client, StdioServerParameters
from mcp.client.sse import sse_client
from strands.tools.mcp import MCPClient

# Tavily Search MCP Server
def create_tavily_mcp_client():
    """Create Tavily MCP client for web search capabilities"""
    return MCPClient(lambda: sse_client(
        f"https://mcp.tavily.com/mcp/?tavilyApiKey={os.getenv('TAVILY_API_KEY')}"
    ))

# Firecrawl MCP Server
def create_firecrawl_mcp_client():
    """Create Firecrawl MCP client for web scraping"""
    return MCPClient(lambda: sse_client(
        f"https://mcp.firecrawl.dev/{os.getenv('FIRECRAWL_API_KEY')}/v2/sse"
    ))

# Sequential Thinking MCP Server
def create_sequential_thinking_mcp_client():
    """Create Sequential Thinking MCP client for structured problem solving"""
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-sequential-thinking"]
    )
    return MCPClient(lambda: stdio_client(server_params))

# HubSpot MCP Server
def create_hubspot_mcp_client():
    """Create HubSpot MCP client for CRM operations"""
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "hubspot-mcp-server"],
        env={"HUBSPOT_ACCESS_TOKEN": os.getenv("HUBSPOT_ACCESS_TOKEN")}
    )
    return MCPClient(lambda: stdio_client(server_params))

# GitHub MCP Server (custom implementation)
def create_github_mcp_client():
    """Create GitHub MCP client for repository operations"""
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "github-mcp-server"],
        env={"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")}
    )
    return MCPClient(lambda: stdio_client(server_params))
```

Main Lead Generation System

```python
# lead_generation_system.py
import asyncio
import json
import os
from datetime import datetime
from strands import Agent
from strands.models.openai import OpenAIModel
from mcp_servers_config import (
    create_tavily_mcp_client,
    create_firecrawl_mcp_client,
    create_sequential_thinking_mcp_client,
    create_hubspot_mcp_client,
    create_github_mcp_client
)

class IntelligentLeadGenerationSystem:
    def __init__(self):
        # Initialize model
        self.model = OpenAIModel(
            api_key=os.getenv("OPENAI_API_KEY"),
            model="gpt-4-turbo",
            temperature=0.7
        )

        # Initialize MCP clients
        self.tavily_client = create_tavily_mcp_client()
        self.firecrawl_client = create_firecrawl_mcp_client()
        self.sequential_client = create_sequential_thinking_mcp_client()
        self.hubspot_client = create_hubspot_mcp_client()
        self.github_client = create_github_mcp_client()

    async def execute_lead_generation_workflow(self, industry: str, company_size: str, location: str):
        """
        Execute comprehensive lead generation workflow
        combining multiple MCP servers
        """
        # Use all MCP servers in context managers
        with (self.tavily_client, self.firecrawl_client, self.sequential_client,
              self.hubspot_client, self.github_client):

            # Get tools from all MCP servers
            tavily_tools = self.tavily_client.list_tools_sync()
            firecrawl_tools = self.firecrawl_client.list_tools_sync()
            sequential_tools = self.sequential_client.list_tools_sync()
            hubspot_tools = self.hubspot_client.list_tools_sync()
            github_tools = self.github_client.list_tools_sync()

            # Combine all tools
            all_tools = (tavily_tools + firecrawl_tools + sequential_tools +
                         hubspot_tools + github_tools)

            # Create specialized agents for different tasks
            research_agent = self.create_research_agent(all_tools)
            analysis_agent = self.create_analysis_agent(all_tools)
            crm_agent = self.create_crm_agent(all_tools)

            # Execute the workflow
            results = await self.run_workflow(
                research_agent, analysis_agent, crm_agent,
                industry, company_size, location
            )

            return results

    def create_research_agent(self, tools):
        """Create agent specialized in web research and data collection"""
        system_prompt = """
        You are a specialized lead generation research agent with access to
        powerful web search and scraping tools.

        Your responsibilities:
        1. Use sequential_thinking to break down research tasks methodically
        2. Use tavily_web_search to find potential leads and industry information
        3. Use firecrawl_scrape to extract detailed company information from websites
        4. Focus on finding companies that match the specified criteria
        5. Extract key contact information, company details, and business context

        Always use sequential thinking to plan your research approach before executing searches.
        Prioritize recent, accurate, and relevant information.
        """
        return Agent(model=self.model, tools=tools, system_prompt=system_prompt)

    def create_analysis_agent(self, tools):
        """Create agent specialized in data analysis and qualification"""
        system_prompt = """
        You are a lead qualification and analysis specialist.

        Your responsibilities:
        1. Use sequential_thinking to systematically analyze lead quality
        2. Score leads based on fit criteria (industry, size, location, technology stack)
        3. Use github tools to research company's technical stack and activity
        4. Identify decision makers and potential pain points
        5. Recommend personalized outreach strategies

        Focus on quality over quantity - provide detailed analysis for high-value prospects.
        """
        return Agent(model=self.model, tools=tools, system_prompt=system_prompt)

    def create_crm_agent(self, tools):
        """Create agent specialized in CRM operations and data management"""
        system_prompt = """
        You are a CRM management specialist focused on lead data organization.

        Your responsibilities:
        1. Use hubspot tools to create and update contact/company records
        2. Organize leads with appropriate tags, properties, and pipeline stages
        3. Create follow-up tasks and reminders
        4. Ensure data quality and avoid duplicates
        5. Set up automated workflows for lead nurturing

        Always verify existing records before creating new ones to prevent duplicates.
        Use consistent naming conventions and data formatting.
        """
        return Agent(model=self.model, tools=tools, system_prompt=system_prompt)

    async def run_workflow(self, research_agent, analysis_agent, crm_agent,
                           industry, company_size, location):
        """Execute the complete lead generation workflow"""

        # Step 1: Research Phase
        research_query = f"""
        I need to find high-quality leads for our B2B software solution.

        Target criteria:
        - Industry: {industry}
        - Company size: {company_size}
        - Location: {location}

        Please use sequential thinking to plan your research approach, then:
        1. Search for companies matching these criteria using Tavily
        2. Find their websites and key decision makers
        3. Use Firecrawl to extract detailed company information from their websites
        4. Look for technology adoption signals and pain points we could address

        Focus on finding 10-15 high-quality prospects with complete information.
        """

        print("🔍 Starting lead research phase...")
        research_results = research_agent(research_query)

        # Step 2: Analysis Phase
        analysis_query = f"""
        Based on the research results below, please analyze and qualify these leads:

        {research_results.message}

        For each company, please:
        1. Use sequential thinking to systematically evaluate lead quality
        2. Check their GitHub presence and technical activity
        3. Score them on a 1-10 scale based on our ideal customer profile
        4. Identify specific pain points our solution could address
        5. Recommend personalized outreach approaches

        Prioritize the top 5-7 leads for immediate outreach.
        """

        print("📊 Analyzing and qualifying leads...")
        analysis_results = analysis_agent(analysis_query)

        # Step 3: CRM Integration Phase
        crm_query = f"""
        Based on the qualified leads from our analysis, please:

        Research Results: {research_results.message}
        Analysis Results: {analysis_results.message}

        1. Create company records in HubSpot for the top qualified leads
        2. Create contact records for identified decision makers
        3. Set appropriate lead scores and pipeline stages
        4. Create follow-up tasks with personalized notes for the sales team
        5. Tag leads with relevant industry and qualification information

        Ensure all data is properly formatted and categorized for easy follow-up.
        """

        print("💼 Updating CRM with qualified leads...")
        crm_results = crm_agent(crm_query)

        return {
            "research_results": research_results.message,
            "analysis_results": analysis_results.message,
            "crm_results": crm_results.message,
            "timestamp": datetime.now().isoformat(),
            "criteria": {
                "industry": industry,
                "company_size": company_size,
                "location": location
            }
        }

# Usage example
async def main():
    system = IntelligentLeadGenerationSystem()

    # Execute lead generation for fintech startups
    results = await system.execute_lead_generation_workflow(
        industry="Financial Technology",
        company_size="50-200 employees",
        location="San Francisco Bay Area"
    )

    print("✅ Lead generation workflow completed!")
    print(json.dumps(results, indent=2))

if __name__ == "__main__":
    asyncio.run(main())
```

Example 2: Intelligent DevOps Incident Response System

This example demonstrates an automated incident response system that combines AWS monitoring, GitHub issue creation, and structured problem-solving.

```python
# devops_incident_response.py
import asyncio
import json
import os
from datetime import datetime
from strands import Agent
from strands.models.anthropic import AnthropicModel
from mcp_servers_config import (
    create_sequential_thinking_mcp_client,
    create_github_mcp_client
)

# AWS MCP Server setup
from mcp.client.stdio import stdio_client, StdioServerParameters
from strands.tools.mcp import MCPClient

def create_aws_mcp_client():
    """Create AWS MCP client for cloud operations"""
    server_params = StdioServerParameters(
        command="uvx",
        args=["awslabs.lambda-tool-mcp-server@latest"],
        env={
            "AWS_PROFILE": os.getenv("AWS_PROFILE", "default"),
            "AWS_REGION": os.getenv("AWS_REGION", "us-east-1")
        }
    )
    return MCPClient(lambda: stdio_client(server_params))

class DevOpsIncidentResponseSystem:
    def __init__(self):
        self.model = AnthropicModel(
            api_key=os.getenv("ANTHROPIC_API_KEY"),
            model="claude-3-5-sonnet-20241022",
            max_tokens=4000
        )

        # Initialize MCP clients
        self.sequential_client = create_sequential_thinking_mcp_client()
        self.aws_client = create_aws_mcp_client()
        self.github_client = create_github_mcp_client()

    async def handle_incident(self, alert_data: dict):
        """
        Handle production incident with automated response workflow
        """
        with (self.sequential_client, self.aws_client, self.github_client):
            # Get tools from all MCP servers
            sequential_tools = self.sequential_client.list_tools_sync()
            aws_tools = self.aws_client.list_tools_sync()
            github_tools = self.github_client.list_tools_sync()

            all_tools = sequential_tools + aws_tools + github_tools

            # Create incident response agent
            incident_agent = Agent(
                model=self.model,
                tools=all_tools,
                system_prompt="""
                You are an expert DevOps incident response specialist with access to
                AWS monitoring, GitHub issue tracking, and structured problem-solving tools.

                Your workflow:
                1. Use sequential_thinking to systematically analyze the incident
                2. Use AWS tools to gather system metrics and logs
                3. Determine root cause and impact assessment
                4. Create detailed GitHub issues with action items
                5. Implement immediate mitigation if possible
                6. Document lessons learned and preventive measures

                Focus on rapid response, clear communication, and thorough documentation.
                """
            )

            # Process the incident
            incident_query = f"""
            PRODUCTION INCIDENT DETECTED:

            Alert Data: {json.dumps(alert_data, indent=2)}

            Please execute our incident response protocol:
            1. Use sequential thinking to break down the incident analysis systematically
            2. Check AWS resources related to this alert (EC2 instances, Lambda functions, RDS, etc.)
            3. Gather relevant logs and metrics to understand the scope
            4. Determine immediate mitigation steps
            5. Create a detailed GitHub issue in our incident-tracking repository
            6. If safe, implement immediate fixes using AWS tools
            7. Document timeline and next steps for team review

            This is a {alert_data.get('severity', 'medium')} severity incident.
            Time is critical - prioritize rapid assessment and mitigation.
            """

            print(f"🚨 Processing {alert_data.get('severity', 'medium')} severity incident...")
            response = incident_agent(incident_query)

            return {
                "incident_id": alert_data.get('incident_id'),
                "response": response.message,
                "timestamp": datetime.now().isoformat(),
                "severity": alert_data.get('severity'),
                "status": "processed"
            }

# Usage example
async def main():
    system = DevOpsIncidentResponseSystem()

    # Simulate incident alert
    alert_data = {
        "incident_id": "INC-2024-001",
        "severity": "high",
        "service": "user-api",
        "alert_type": "high_error_rate",
        "message": "Error rate exceeded 5% threshold",
        "affected_resources": ["user-api-prod", "user-db-cluster"],
        "metrics": {
            "error_rate": "7.3%",
            "response_time": "2.1s",
            "affected_users": "~1200"
        }
    }

    result = await system.handle_incident(alert_data)
    print("✅ Incident response completed!")
    print(json.dumps(result, indent=2))

if __name__ == "__main__":
    asyncio.run(main())
```

Example 3: Intelligent Content Marketing Automation

This example demonstrates content marketing automation combining web research, content analysis, and social media management.

```python
# content_marketing_automation.py
import asyncio
import os
from datetime import datetime
from strands import Agent
from strands.models.openai import OpenAIModel
from mcp_servers_config import (
    create_tavily_mcp_client,
    create_firecrawl_mcp_client,
    create_sequential_thinking_mcp_client,
    create_hubspot_mcp_client
)

class ContentMarketingAutomationSystem:
    def __init__(self):
        self.model = OpenAIModel(
            api_key=os.getenv("OPENAI_API_KEY"),
            model="gpt-4-turbo",
            temperature=0.8
        )

        # Initialize MCP clients
        self.tavily_client = create_tavily_mcp_client()
        self.firecrawl_client = create_firecrawl_mcp_client()
        self.sequential_client = create_sequential_thinking_mcp_client()
        self.hubspot_client = create_hubspot_mcp_client()

    async def create_content_campaign(self, topic: str, target_audience: str, content_types: list):
        """
        Create comprehensive content marketing campaign
        """
        with (self.tavily_client, self.firecrawl_client,
              self.sequential_client, self.hubspot_client):

            # Get all tools
            all_tools = (
                self.tavily_client.list_tools_sync() +
                self.firecrawl_client.list_tools_sync() +
                self.sequential_client.list_tools_sync() +
                self.hubspot_client.list_tools_sync()
            )

            # Create content strategy agent
            content_strategist = Agent(
                model=self.model,
                tools=all_tools,
                system_prompt="""
                You are an expert content marketing strategist with access to
                web research, content analysis, and CRM tools.

                Your workflow:
                1. Use sequential_thinking to plan content strategy systematically
                2. Research trending topics and competitor content using Tavily
                3. Analyze high-performing content using Firecrawl
                4. Create content calendar and campaigns in HubSpot
                5. Develop personalized content for different audience segments

                Focus on data-driven content strategies that drive engagement and conversions.
                """
            )

            campaign_query = f"""
            I need to create a comprehensive content marketing campaign:

            Topic: {topic}
            Target Audience: {target_audience}
            Content Types: {', '.join(content_types)}

            Please execute our content strategy process:
            1. Use sequential thinking to plan the campaign systematically
            2. Research current trends and popular content around this topic using Tavily
            3. Analyze competitor content and high-performing pieces using Firecrawl
            4. Identify content gaps and opportunities
            5. Create a 30-day content calendar with specific topics and formats
            6. Set up campaign tracking and audience segmentation in HubSpot
            7. Provide specific content briefs for each piece

            Focus on creating content that educates, engages, and converts our target audience.
            """

            print(f"📝 Creating content campaign for: {topic}")
            result = content_strategist(campaign_query)

            return {
                "campaign": result.message,
                "topic": topic,
                "audience": target_audience,
                "content_types": content_types,
                "created_at": datetime.now().isoformat()
            }

# Usage example
async def main():
    system = ContentMarketingAutomationSystem()

    result = await system.create_content_campaign(
        topic="AI-Powered Business Automation",
        target_audience="Small business owners and entrepreneurs",
        content_types=["blog posts", "social media content", "email campaigns", "case studies"]
    )

    print("✅ Content campaign created!")
    print(result['campaign'])

if __name__ == "__main__":
    asyncio.run(main())
```

Example 4: AWS Infrastructure Optimization System

```python
# aws_infrastructure_optimization.py
import asyncio
import os
from mcp.client.stdio import stdio_client, StdioServerParameters
from strands import Agent
from strands.models.anthropic import AnthropicModel
from strands.tools.mcp import MCPClient
from mcp_servers_config import create_sequential_thinking_mcp_client

# Additional AWS-specific MCP clients
def create_aws_cost_optimization_client():
    server_params = StdioServerParameters(
        command="uvx",
        args=["awslabs.aws-cost-optimization-mcp-server@latest"],
        env={"AWS_PROFILE": os.getenv("AWS_PROFILE", "default")}
    )
    return MCPClient(lambda: stdio_client(server_params))

def create_aws_cloudwatch_client():
    server_params = StdioServerParameters(
        command="uvx",
        args=["awslabs.cloudwatch-mcp-server@latest"],
        env={"AWS_PROFILE": os.getenv("AWS_PROFILE", "default")}
    )
    return MCPClient(lambda: stdio_client(server_params))

class AWSInfrastructureOptimizer:
    def __init__(self):
        self.model = AnthropicModel(
            api_key=os.getenv("ANTHROPIC_API_KEY"),
            model="claude-3-5-sonnet-20241022"
        )

        self.sequential_client = create_sequential_thinking_mcp_client()
        self.cost_client = create_aws_cost_optimization_client()
        self.cloudwatch_client = create_aws_cloudwatch_client()

    async def optimize_infrastructure(self, environment: str):
        """
        Analyze and optimize AWS infrastructure costs and performance
        """
        with (self.sequential_client, self.cost_client, self.cloudwatch_client):
            all_tools = (
                self.sequential_client.list_tools_sync() +
                self.cost_client.list_tools_sync() +
                self.cloudwatch_client.list_tools_sync()
            )

            optimizer_agent = Agent(
                model=self.model,
                tools=all_tools,
                system_prompt="""
                You are an AWS infrastructure optimization specialist.

                Your responsibilities:
                1. Use sequential_thinking to systematically analyze infrastructure
                2. Analyze cost patterns and identify optimization opportunities
                3. Review CloudWatch metrics for performance optimization
                4. Recommend right-sizing, reserved instances, and architectural improvements
                5. Provide implementation roadmap with priority and impact estimates

                Focus on maximizing cost efficiency while maintaining performance and reliability.
                """
            )

            optimization_query = f"""
            Please analyze and optimize our AWS infrastructure for the {environment} environment:

            1. Use sequential thinking to plan a comprehensive infrastructure review
            2. Analyze current costs and spending patterns across all services
            3. Review CloudWatch metrics to identify underutilized resources
            4. Check for opportunities to use spot instances, reserved capacity, etc.
            5. Identify architectural improvements for better cost efficiency
            6. Provide prioritized recommendations with estimated savings
            7. Create implementation timeline with risk assessment

            Focus on both immediate wins and longer-term strategic optimizations.
            """

            print(f"⚡ Optimizing {environment} infrastructure...")
            result = optimizer_agent(optimization_query)

            return result.message

# Usage
async def main():
    optimizer = AWSInfrastructureOptimizer()
    result = await optimizer.optimize_infrastructure("production")
    print("✅ Infrastructure optimization completed!")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

Key Benefits of This Approach

  1. Model-First Intelligence
    Strands' model-first approach means each agent can dynamically adapt its strategy based on the specific situation, rather than following rigid pre-programmed workflows.

  2. Seamless MCP Integration
    The framework's native MCP support allows agents to discover and use tools dynamically, creating flexible automation systems that can evolve with new capabilities.

  3. Production-Ready Architecture
    Built-in observability, error handling, and scalability features make these examples suitable for production deployment.

  4. Multi-Agent Orchestration
    The examples demonstrate how specialized agents can collaborate on complex tasks, each leveraging different MCP servers for their domain expertise.
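The tool discovery described under "Seamless MCP Integration" ultimately comes down to collecting tool lists from several servers and merging them before handing them to an agent. The sketch below illustrates just that merge step; the dicts are hypothetical stand-ins for the tool objects that `list_tools_sync()` returns, not the real Strands types.

```python
# Illustrative sketch: merging tool lists from multiple MCP servers.
# The dicts below are hypothetical stand-ins for real tool objects;
# only the merge pattern itself is the point.

def merge_tools(*tool_lists):
    """Combine tool lists, keeping the first occurrence when two
    servers expose a tool with the same name."""
    seen, merged = set(), []
    for tools in tool_lists:
        for tool in tools:
            if tool["name"] not in seen:
                seen.add(tool["name"])
                merged.append(tool)
    return merged

search_tools = [{"name": "tavily_web_search"}, {"name": "firecrawl_scrape"}]
crm_tools = [{"name": "hubspot_create_contact"}, {"name": "firecrawl_scrape"}]

print([t["name"] for t in merge_tools(search_tools, crm_tools)])
# ['tavily_web_search', 'firecrawl_scrape', 'hubspot_create_contact']
```

De-duplicating by name avoids handing the agent two tools with colliding identifiers when servers overlap in capabilities.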
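On the production-readiness point, in practice you will still want your own retry logic around agent and MCP calls for transient failures (network blips, rate limits). Here is a minimal exponential-backoff wrapper; it is a generic sketch, not part of the Strands API, and `flaky_agent_call` is a stand-in for a real agent invocation.

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn, retrying with exponential backoff on any exception.
    A generic sketch of error handling to wrap around agent or MCP
    calls in production; not part of the Strands API."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage: a stand-in call that fails twice, then succeeds
calls = {"count": 0}
def flaky_agent_call():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient MCP failure")
    return "agent response"

print(with_retries(flaky_agent_call, attempts=3, base_delay=0.01))
# agent response
```

In a real deployment you would narrow the caught exception types and log each retry rather than swallowing the failures silently.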
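The research → analysis → CRM hand-off in Example 1 is, structurally, a sequential pipeline in which each agent consumes the previous agent's message. Stripped of the Strands specifics, the orchestration pattern looks like this (the lambdas are stand-ins for real agent calls):

```python
def run_pipeline(stages, initial_input):
    """Chain specialized 'agents' (here plain callables) so each stage
    receives the previous stage's output, mirroring the
    research -> analysis -> CRM hand-off in Example 1."""
    message = initial_input
    history = []
    for name, stage in stages:
        message = stage(message)          # each agent sees the prior result
        history.append((name, message))   # keep an audit trail per stage
    return message, history

stages = [
    ("research", lambda q: f"leads for {q}"),
    ("analysis", lambda r: f"scored {r}"),
    ("crm",      lambda a: f"stored {a}"),
]
final, history = run_pipeline(stages, "fintech")
print(final)  # stored scored leads for fintech
```

Keeping a per-stage history like this also gives you the raw material for the kind of timestamped result dict the examples return.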

Running the Examples

  1. Install dependencies:

```bash
pip install strands-agents strands-agents-tools
pip install tavily-python firecrawl-py hubspot-api-client
```

  2. Configure MCP servers:

```bash
# Install MCP servers
npx -y @modelcontextprotocol/server-sequential-thinking
npx -y hubspot-mcp-server
npx -y github-mcp-server
```

  3. Set environment variables:

```bash
export TAVILY_API_KEY="your_key"
export FIRECRAWL_API_KEY="your_key"
export HUBSPOT_ACCESS_TOKEN="your_token"
export GITHUB_TOKEN="your_token"
export OPENAI_API_KEY="your_key"  # or ANTHROPIC_API_KEY
```

  4. Run the examples:

```bash
python lead_generation_system.py
python devops_incident_response.py
python content_marketing_automation.py
python aws_infrastructure_optimization.py
```

These examples demonstrate how Strands Agents combined with specialized MCP servers can create powerful, intelligent automation systems that adapt to complex real-world scenarios. The model-first approach allows for sophisticated reasoning and decision-making, while MCP servers provide the specialized tools needed for domain-specific tasks.

If you like the article and would like to support me, make sure to:

📰 View more content on my Medium profile
🔔 Follow Me: LinkedIn | Medium
