Introduction
AI is rapidly evolving and disrupting many industries. Initially, LLM agents could only provide insights based on their training data, which was limited and static. This prevented us from using AI agents to obtain real-time insights and from using them effectively in an enterprise context.
The introduction of MCP (Model Context Protocol) was a game changer in this field: it enabled AI agents to get real-time information and to access enterprise data, which unlocked a whole new scope for AI agents and powerful new use cases.
In this post, I would like to share what I have learned about MCP servers and walk through a sample project: factory machine data is stored in a PostgreSQL database and exposed through an MCP server, and an AI agent invokes the tools exposed by that server to fetch machine data based on the user's input and return a structured response.
Full code for this project can be found at Link
Before we get into the details of the project implementation, let us first understand what MCP is, what the benefits of using it are, and how it unlocks new abilities for AI agents.
MCP Server
MCP (Model Context Protocol) was introduced by Anthropic in November 2024. It provides a standardized way for AI applications to connect with external systems and data. With MCP, LLMs are no longer limited to static training data; they can now access real-time information and enterprise systems securely.
An MCP server exposes three primitives (sketched below):
- Tools: Executable functions that AI applications can invoke (e.g., API calls, database queries)
- Resources: Data sources that provide context (e.g., file contents, database records, API responses)
- Prompts: Structured templates that help guide LLMs (e.g., system prompts, few-shot examples)
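As a quick illustration, here is how the three primitives can be registered with the TypeScript MCP SDK. The tool, resource, and prompt names below are made up for the example; only `get-machine-record` (shown later) exists in this project.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "demo", version: "1.0.0" });

// Tool: an executable function the AI application can invoke
server.tool(
  "echo",
  "Echoes back the input",
  { message: z.string() },
  async ({ message }) => ({
    content: [{ type: "text", text: `You said: ${message}` }],
  })
);

// Resource: a readable data source identified by a URI
server.resource("config", "config://app", async (uri) => ({
  contents: [{ uri: uri.href, text: "App configuration here" }],
}));

// Prompt: a reusable template the client can request
server.prompt("summarize", { topic: z.string() }, ({ topic }) => ({
  messages: [
    { role: "user", content: { type: "text", text: `Summarize ${topic}` } },
  ],
}));
```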
Benefits of using an MCP Server in an enterprise context
- Secure Access: Data is exposed only to authorized programs.
- Restricted Access: Fine-grained control over the data used by AI agents.
- Standard Integration: Instead of building custom connectors for every AI agent, MCP provides a common protocol for connecting to databases, APIs, and internal tools.
- Auditability & Compliance: Every tool call and database request by the AI agent can be logged, which helps meet compliance requirements and track AI behavior (see the sketch after this list).
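For instance, a simple wrapper around a tool callback is enough to get one log line per invocation. `withAudit` is a hypothetical helper for illustration, not part of this project:

```typescript
// Hypothetical audit wrapper: logs every tool invocation, then delegates
// to the real handler unchanged.
function withAudit<A, R>(toolName: string, handler: (args: A) => Promise<R>) {
  return async (args: A): Promise<R> => {
    console.log(
      `[audit] ${new Date().toISOString()} tool=${toolName} args=${JSON.stringify(args)}`
    );
    return handler(args);
  };
}
```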
Project Architecture
In this project, I have set up an MCP server that exposes factory data stored in PostgreSQL as a tool. The chat-client application communicates with the MCP server using Streamable HTTP calls. I have seeded the PostgreSQL database with a sample Industrial Energy Forecast Dataset obtained from Kaggle. The dataset can be downloaded from Link
- MCP Server: Provides tools which the AI agent calls to get the machine data stored in the database.
- PostgreSQL: Stores the seeded dataset from Kaggle.
- Ollama: Local LLM (llama3.1)
- Chat Client: An Express application which interacts with the MCP server and the LLM to produce a response for the user's request.
Setup
- Start the services (a sketch of the compose file follows these steps)
docker compose up -d
- Pull the LLM model for Ollama
docker exec -it ollama ollama pull llama3.1
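For reference, a docker-compose.yml along these lines could wire the four services together. Image names, ports, and credentials here are assumptions for illustration, not the repo's exact file:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: factory
      POSTGRES_PASSWORD: factory
      POSTGRES_DB: factory
    ports:
      - "5432:5432"

  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"

  mcp-server:
    build: ./mcp-server
    depends_on: [postgres]
    ports:
      - "4000:4000"

  chat-client:
    build: ./chat-client
    depends_on: [mcp-server, ollama]
    ports:
      - "4001:4001"
```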
Core Application logic
MCP Server
In this project, the MCP server is an Express application that exposes tools the AI agent can use to fetch the requested data on demand.
I have created a function `registerTools` which registers all the available tools provided by the MCP server in `tools.ts` (`mcp-server/src/modules/tools.ts`). Currently, there is only one tool, `get-machine-record`, which gets the latest record for a given `machineId` from the PostgreSQL database.
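To give a sense of the wiring, a minimal Express + Streamable HTTP bootstrap with the TypeScript MCP SDK could look like the sketch below. The stateless per-request setup and port 4000 are assumptions, not necessarily the repo's exact code:

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { registerTools } from "./modules/tools";

const app = express();
app.use(express.json());

// Stateless mode: a fresh server + transport per incoming MCP request.
app.post("/mcp", async (req, res) => {
  const server = new McpServer({ name: "factory-mcp", version: "1.0.0" });
  registerTools(server); // registers get-machine-record and any future tools

  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  res.on("close", () => {
    transport.close();
    server.close();
  });

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(4000, () => console.log("MCP server listening on :4000"));
```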
Each tool is defined with the following components:
- Name: identifier for the tool (e.g., "get-machine-record")
- Input Schema: JSON schema describing the required arguments (e.g., { "machineId": "string" })
- Callback: the function that contains the logic (e.g., SQL query).
Example:
```typescript
server.tool(
  "get-machine-record",
  "Gets latest record for a machine",
  { machineId: z.string() },
  async ({ machineId }) => {
    // Fetch data from PostgreSQL
    console.log(`Fetching record for machine ${machineId}`);
    const machineRecord = await getLatestMachineRecord(machineId);
    if (machineRecord) {
      return {
        content: [{ type: "text", text: JSON.stringify(machineRecord) }],
      };
    } else {
      return {
        content: [{ type: "text", text: "[]" }],
      };
    }
  }
);
```
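The `getLatestMachineRecord` helper is essentially a parameterized SQL query against the seeded table. A minimal sketch with node-postgres could look like this; the table and column names are assumptions about the seeded schema, not the exact ones in the repo:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Returns the most recent row for a machine, or null if none exists.
// Table/column names (machine_records, machine_id, recorded_at) are assumed.
export async function getLatestMachineRecord(machineId: string) {
  const { rows } = await pool.query(
    `SELECT *
       FROM machine_records
      WHERE machine_id = $1
      ORDER BY recorded_at DESC
      LIMIT 1`,
    [machineId]
  );
  return rows[0] ?? null;
}
```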
Chat Client
The chat client is also an Express application, and it exposes two APIs.
1. Direct MCP query: `http://localhost:4001/machines/${machineId}` → Fetches the raw machine data of the given `machineId` from the MCP server by directly calling the `get-machine-record` tool (see the client sketch after this list).
2. AI-assisted query: `http://localhost:4001/chat` → This API provides an AI-generated response for the requested details of a machine. The core logic is implemented in `chat.services.ts` (`chat-client/src/services/chat.services.ts`).
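For the direct query path, the chat client can talk to the MCP server through the SDK's Streamable HTTP client. A sketch, assuming the server's `/mcp` endpoint on port 4000 from the earlier snippet:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Assumed MCP server URL; adjust to wherever the /mcp endpoint lives.
const transport = new StreamableHTTPClientTransport(
  new URL("http://localhost:4000/mcp")
);
const client = new Client({ name: "chat-client", version: "1.0.0" });
await client.connect(transport);

// Direct tool invocation, as used by the /machines/:machineId route.
const result = await client.callTool({
  name: "get-machine-record",
  arguments: { machineId: "M-001" },
});
console.log(result.content);
```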
Workflow (a sketch of this loop follows the list):
- The user message is converted into a prompt and sent to Ollama (local LLM).
- If the LLM decides it needs external data, it requests a call to a tool provided by the MCP server.
- The tool fetches the machine data from PostgreSQL and returns it to the chat client.
- The result is then fed back into the LLM.
- The LLM generates a refined, human-readable answer for the user.
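Here is a hedged sketch of that loop using Ollama's `/api/chat` endpoint and the MCP client from the earlier snippet. `answer`, `ollamaChat`, and the message shapes are illustrative, not the exact code in `chat.services.ts`:

```typescript
// Minimal call to Ollama's chat API with tool definitions attached.
async function ollamaChat(messages: any[], tools: any[]): Promise<any> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.1", messages, tools, stream: false }),
  });
  return res.json();
}

// Hypothetical core of the chat flow: one tool round-trip, then a final answer.
async function answer(userMessage: string, tools: any[], mcpClient: any) {
  const messages: any[] = [{ role: "user", content: userMessage }];

  // First pass: the model decides whether it needs external data.
  let response = await ollamaChat(messages, tools);

  const toolCalls = response.message.tool_calls ?? [];
  if (toolCalls.length > 0) {
    messages.push(response.message); // keep the tool request in context
    for (const call of toolCalls) {
      // Execute the requested tool via the MCP server.
      const result = await mcpClient.callTool({
        name: call.function.name,
        arguments: call.function.arguments,
      });
      messages.push({ role: "tool", content: JSON.stringify(result.content) });
    }
    // Second pass: the model phrases the tool output as a readable answer.
    response = await ollamaChat(messages, tools);
  }
  return response.message.content;
}
```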
Next Steps 🚀
For the next iteration of this project, I am planning to implement the features below:
1. Integrate live data feeds: Ingest real-time factory telemetry into PostgreSQL, so the AI agent works with fresh data.
2. Add authentication to the MCP Server: Restrict access to only authorized applications and users.
3. Expand the toolset: Create more tools beyond the single `get-machine-record`.
4. Support multiple MCP Servers: Allow the client to connect to and aggregate responses from different MCP servers.