A comprehensive tutorial for implementing Model Context Protocol (MCP) servers and clients in Python, with Ollama integration for enhanced AI interactions.
This project demonstrates how to build MCP servers and clients from scratch in Python. MCP is a protocol that allows AI assistants to securely connect to data sources and tools, enabling more powerful and contextual interactions.
- MCP Server: A fully functional MCP server with tools, resources, and prompts
- MCP Client: A client that can connect to and interact with MCP servers
- Ollama Integration: Enhanced chat experience using Ollama with MCP tool support
- Interactive Mode: Command-line interface for testing MCP functionality
```
mcp-init/
├── README.md                 # This documentation
├── requirements.txt          # Python dependencies
├── setup.py                  # Setup script for easy installation
├── run_tests.py              # Comprehensive test suite
├── config.json               # Configuration example
├── start.py                  # Python launcher script
├── start.ps1                 # PowerShell launcher script
├── test_server.py            # Simple server test script
├── .gitignore                # Git ignore file
├── .github/
│   └── workflows/
│       └── test.yml          # GitHub Actions CI workflow
├── server/
│   └── server.py             # MCP Server implementation
└── client/
    ├── client.py             # MCP Client implementation
    ├── ollama_integration.py # Ollama + MCP integration
    └── quick_test.py         # Quick integration test
```

Key files:

- `server/server.py`: Complete MCP server implementation with tools, resources, and prompts
- `client/client.py`: MCP client with demo and interactive modes
- `client/ollama_integration.py`: Integration layer between Ollama and MCP
- `client/quick_test.py`: Quick test script for Ollama + MCP integration
- `start.py`: User-friendly launcher for all demonstration modes
- `test_server.py`: Simple script to verify server functionality
- `config.json`: Example configuration for MCP setup
- Python 3.8+ installed
- Ollama installed and running locally (for AI integration)
- Install from: https://ollama.ai/
- Pull a model:

  ```bash
  ollama pull llama3.2
  # or
  ollama pull deepseek-r1:8b
  ```

- Start Ollama (if not running as a service):

  ```bash
  ollama serve
  ```
Note: The integration automatically detects available models and uses the best one available. Models like `deepseek-r1:8b` show reasoning steps, while `llama3.2` provides direct responses.
- Clone this repository:

  ```bash
  git clone https://github.com/prashplus/mcp-init.git
  cd mcp-init
  ```

- Install Python dependencies:

  ```bash
  # Option 1: Use the setup script (recommended)
  python setup.py

  # Option 2: Manual installation
  pip install -r requirements.txt
  ```

- Verify installation:

  ```bash
  python run_tests.py
  ```

- (Optional) Verify Ollama is running for AI integration:

  ```bash
  # Check if Ollama is accessible
  curl http://localhost:11434/api/tags
  ```

  ```powershell
  # Or on Windows PowerShell
  Invoke-RestMethod -Uri "http://localhost:11434/api/tags" -Method Get
  ```

The fastest way to get started is using the launcher script:

```bash
# Python launcher (cross-platform)
python start.py
```

```powershell
# Or PowerShell launcher (Windows)
.\start.ps1
```

This will show you a menu with all available demonstration modes.
Run the basic demo to see MCP in action:
```bash
cd client
python client.py
```

This will:
- Start the MCP server
- Connect the client to the server
- Demonstrate tools (echo, calculator, time)
- Show resources (greeting, system info)
- Display available prompts
For hands-on testing:
```bash
cd client
python client.py interactive
```

Available commands:

- `tools` - List available tools
- `echo <message>` - Test echo tool
- `calc <expression>` - Test calculator (e.g., `calc 10 + 5 * 2`)
- `time` - Get current time
- `resources` - List available resources
- `greeting` - Read greeting resource
- `sysinfo` - Read system information
- `prompts` - List available prompts
- `quit` - Exit
Test the integration with predefined queries:
```bash
cd client
python ollama_integration.py
```

Chat with Ollama enhanced by MCP tools:

```bash
cd client
python ollama_integration.py chat
```

Run a quick verification of the integration:

```bash
cd client
python quick_test.py
```

Try these example queries:
- "What is 25 * 47?" (uses calculator tool)
- "What time is it?" (uses time tool)
- "Echo back 'Hello World!'" (uses echo tool)
- "Tell me about machine learning" (regular AI response)
Commands in chat mode:
- `model <name>` - Switch Ollama model (e.g., `model llama3.2:latest`)
- `models` - List all available models
- `tools` - List available MCP tools
- `quit` - Exit chat
Model Tips:
- `llama3.2:latest` - Fast, direct responses
- `deepseek-r1:8b` - Shows reasoning process (good for learning)
- `codellama:7b` - Specialized for coding tasks
Available tools:

- `echo` - Echo back input messages
- `calculate` - Perform mathematical calculations (see the sketch after this list)
- `get_time` - Get current date and time
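As an aside on the `calculate` tool: evaluating user-supplied expressions with `eval` is risky, so calculator tools in this style commonly walk the expression's AST instead. A minimal illustrative sketch (not the repository's exact code):

```python
import ast
import operator

# Map AST operator nodes to plain arithmetic functions.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> float:
    """Safely evaluate a plain arithmetic expression such as '10 + 5 * 2'."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_calculate("10 + 5 * 2"))  # -> 20
```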
Available resources:

- `greeting` - A simple greeting message
- `system_info` - Basic system information (platform, Python version, etc.)
Available prompts:

- `helpful_assistant` - A customizable helpful assistant prompt
MCP protocol flow:

- Initialization: Client connects to server and exchanges capabilities
- Discovery: Client lists available tools, resources, and prompts
- Interaction: Client calls tools, reads resources, or gets prompts
- Transport: Communication happens over JSON-RPC via stdio
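For example, calling the calculator tool comes down to a request/response pair like the following (payloads shown as Python dicts; the shapes follow MCP's `tools/call` method but are illustrative, not copied from the repository):

```python
# Illustrative JSON-RPC 2.0 exchange for invoking the calculator tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "calculate", "arguments": {"expression": "10 + 5 * 2"}},
}

# The server evaluates the expression and replies with a matching id:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "10 + 5 * 2 = 20"}]},
}
```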
Ollama integration flow:

- User sends a message to the enhanced chat
- System analyzes if MCP tools could help
- Ollama determines which tool to use and with what parameters
- MCP client executes the tool
- Ollama incorporates tool results into the final response
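A condensed sketch of that loop; the `ollama` and `mcp_client` objects and their method names are placeholders, not the actual API of `ollama_integration.py`:

```python
import json

async def chat_with_tools(user_message, ollama, mcp_client, tools):
    """Route one chat turn through Ollama, executing an MCP tool if requested."""
    # Steps 1-3: ask the model whether any advertised tool applies.
    decision = await ollama.generate(
        f"Tools available: {json.dumps(tools)}\n"
        f"User: {user_message}\n"
        'Reply with JSON {"tool": <name or null>, "arguments": {...}}.'
    )
    choice = json.loads(decision)

    if choice.get("tool"):
        # Step 4: execute the chosen tool through the MCP client.
        result = await mcp_client.call_tool(choice["tool"], choice["arguments"])
        # Step 5: let the model weave the tool output into its final answer.
        return await ollama.generate(
            f"User asked: {user_message}\nTool result: {result}\nFinal answer:"
        )
    return await ollama.generate(user_message)
```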
To add new tools, modify `server/server.py`:

```python
# In the setup_tools method, add:
"new_tool": {
    "name": "new_tool",
    "description": "Description of what this tool does",
    "inputSchema": {
        "type": "object",
        "properties": {
            "param1": {
                "type": "string",
                "description": "Parameter description"
            }
        },
        "required": ["param1"]
    }
}

# In the handle_tool_call method, add:
elif tool_name == "new_tool":
    param1 = arguments.get("param1", "")
    result = f"Tool result: {param1}"
```

Ollama configuration:

- Default URL: `http://localhost:11434`
- Models: auto-detected from your Ollama installation
- Default model: first available model (usually `llama3.2:latest` or `deepseek-r1:8b`)
- Timeout: 30 seconds
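For reference, model auto-detection can be done against Ollama's standard `/api/tags` endpoint; a minimal sketch (the helper below is illustrative, not the integration's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def detect_models(base_url: str = OLLAMA_URL) -> list:
    """Return the model names available in the local Ollama installation."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=30) as resp:
        payload = json.load(resp)
    return [model["name"] for model in payload.get("models", [])]

# Mirror the default described above: pick the first available model.
models = detect_models()
print("Default model:", models[0] if models else "none pulled yet")
```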
MCP server transport:

- Uses stdio (stdin/stdout) for communication
- JSON-RPC 2.0 protocol
- Supports async operations
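To make the transport concrete: a stdio JSON-RPC server is essentially a read-dispatch-write loop over stdin/stdout. A stripped-down synchronous sketch (the real server is async and implements the full MCP method set):

```python
import json
import sys

def serve(handlers):
    """Minimal stdio JSON-RPC 2.0 loop: one request per line in, one response out."""
    for line in sys.stdin:
        if not line.strip():
            continue
        request = json.loads(line)
        handler = handlers.get(request["method"])
        result = handler(request.get("params", {})) if handler else None
        response = {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    serve({"ping": lambda params: "pong"})  # illustrative method table
```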
- "Failed to connect to MCP server"
  - Ensure Python can execute `server/server.py`
  - Check that the server script has no syntax errors

- "Error calling Ollama"
  - Verify Ollama is running: `ollama serve`
  - Check if models are available: `ollama list`
  - Pull a model if needed: `ollama pull llama3.2`
  - Test connection: `curl http://localhost:11434/api/tags`

- Tool execution errors
  - Check tool parameters match the expected schema
  - Verify the server is properly handling the tool call
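If curl isn't handy, the same Ollama connectivity check can be scripted in Python (same `/api/tags` endpoint as above):

```python
import json
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
        models = [m["name"] for m in json.load(resp).get("models", [])]
    print("Ollama is up. Models:", ", ".join(models) or "(none pulled yet)")
except OSError as exc:
    print(f"Ollama unreachable ({exc}); try running 'ollama serve'")
```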
Enable logging for more details:

```python
import logging
logging.basicConfig(level=logging.DEBUG)
```

- Fork the repository
- Create a feature branch
- Add your improvements
- Test thoroughly (CI will run automatically)
- Submit a pull request
This repository includes a GitHub Actions CI workflow that:
- Tests on Python 3.8, 3.9, 3.10, 3.11, and 3.12
- Runs on Ubuntu, Windows, and macOS
- Performs syntax checks and linting
- Tests basic MCP server functionality
- Validates all Python scripts compile correctly
The CI runs automatically on:
- Push to `main` or `develop` branches
- Pull requests to `main`
This project is provided as-is for educational purposes. Feel free to modify and distribute.
- Prashant Piprotar (Prash+)
Visit my blog for more Tech Stuff