🔥I Built a Custom MCP Client For Algolia🌀

Algolia MCP Server Challenge: Ultimate User Experience

This is a submission for the Algolia MCP Server Challenge

What I Built

I built a modern, AI-powered Algolia MCP Client that connects to the Algolia MCP Server and uses LLMs (like Claude, Gemini, and GPT-4) to run queries, visualize data, and give you intelligent insights through a friendly chat interface.

What It Does 🤔

  • Ask natural language questions about your Algolia setup/account.

  • Interact with tools exposed by the MCP Server

  • View results in rich markdown with charts, tables, and code snippets

  • Save your chat history locally

  • Switch themes for a personalized experience

It’s basically like having an AI assistant for all your Algolia operations, but with a nice UI.

Tech Breakdown 👨‍💻

Frontend
Built with React, Vite, and Tailwind CSS. It features:

  • A clean, responsive layout

  • Chat interface with markdown/code/chart rendering

  • Theme switching (light, dark, and a custom “Algolia” mode)

  • Prompt templates to speed things up or get started quickly

  • Local chat history

Backend
Powered by FastAPI (Python), it acts as a middleman between the frontend and:

  • The Algolia MCP Server

  • Claude Sonnet 4 LLM (Anthropic)

The LLM figures out which tools to call and formats everything (charts, markdown, or plain old code) before sending it back.
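
To make that concrete, here is a minimal sketch of one LLM turn, assuming the official anthropic Python SDK. The variable names (tool_schemas, session, messages) and the exact model id are illustrative placeholders, not the project's actual code:

```python
import anthropic

async def run_llm_turn(session, messages, tool_schemas, system_prompt):
    """One Claude turn: answer directly or request an Algolia MCP tool call."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # Claude Sonnet 4 (id may differ)
        max_tokens=4096,
        system=system_prompt,   # the formatting prompt shown later in this post
        tools=tool_schemas,     # JSON schemas advertised by the MCP server
        messages=messages,      # chat history so far
    )
    for block in response.content:
        if block.type == "tool_use":
            # Run the requested tool through the MCP session; in the full
            # loop the result goes back to Claude as a tool_result message.
            result = await session.call_tool(block.name, block.input)
            print(f"Tool {block.name} returned {len(result.content)} content item(s)")
    return response
```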

How It Works

Architecture

  • User Interaction: The user enters a query or selects a prompt template.
  • Backend Processing: The frontend sends the query to the backend proxy server.
  • LLM & Tool Orchestration: The backend passes the query to the LLM, which may call Algolia MCP tools to fetch data.
  • Response Generation: The LLM generates a markdown response, possibly including tables, code, or chart blocks.
  • Frontend Rendering: The frontend parses the markdown, renders it, and displays any charts or tables inline.
  • Chat Persistence: The conversation is saved locally.
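
As a rough illustration of steps 2-4, the proxy can be little more than a single FastAPI route. This is a hedged sketch: the /query path, the QueryRequest shape, and the process_query helper are names I've assumed for illustration, not necessarily the project's real API:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

app = FastAPI()

# Let the Vite dev server (http://localhost:5173) call this API during development.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)

class QueryRequest(BaseModel):
    query: str

@app.post("/query")
async def handle_query(req: QueryRequest):
    # Hand the question to the LLM + MCP pipeline and return the parsed
    # content blocks (text / code / chart) for the frontend to render.
    blocks = await process_query(req.query)  # hypothetical helper
    return {"content": blocks}
```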

Demo

Live: https://algolia-mcp-client.vercel.app

Frontend Repo:

Algolia MCP Client

A modern, responsive web MCP client for interacting with the Algolia Model Context Protocol (MCP) server and LLMs (Claude, Gemini, GPT-4, etc.).

Features

  • Chat-based UI: Conversational interface for querying Algolia MCP and LLMs.
  • Prompt Templates: One-click prompt suggestions for common tasks.
  • Markdown & Code Rendering: Supports markdown, syntax-highlighted code, and copy-to-clipboard.
  • Chart Visualization: Renders charts from LLM/MCP responses using Chart.js.
  • Persistent Chat History: Chats are saved in local storage.
  • Mobile Responsive Sidebar: Overlay sidebar for mobile and desktop.
  • MCP Server & Tools Browser: View available MCP servers and tools dynamically.
  • Theme Toggle: Light/dark mode and custom Algolia theme.

Getting Started

Prerequisites

Node.js and npm installed.

Installation

```bash
git clone https://github.com/Kiran1689/algolia-mcp-client.git
cd algolia-mcp-client
npm install
```

Running the App

```bash
npm run dev
```

The app will be available at http://localhost:5173 by default.

Connecting to MCP

The client expects the backend proxy server (see the backend repo below) to be running, so it can reach the Algolia MCP tools.

Backend Repo:

MCP Client

A modular Python client for connecting to MCP servers, integrating with Anthropic Claude, and exposing a FastAPI-based API for handling queries and tool calls.

Features

  • MCP Client: Connects to an MCP server (Node or Python) over stdio, negotiates available tools, and manages tool calls.

  • Claude AI Integration: Uses Anthropic Claude to generate natural language responses and decide when to invoke tools.

  • Tool Invocation: Handles multi-turn reasoning between Claude and external tools, returning structured responses (markdown, code, chart data).

  • FastAPI Server: Provides HTTP API endpoints for frontend integration.

  • .env support: Loads Anthropic API keys and other environment variables from a .env file.

  • CORS Support: Allows flexible frontend/backend development.
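
For context, here is roughly what connecting over stdio and negotiating tools looks like with the official mcp Python SDK; the server path is a placeholder, and this sketch only lists tools rather than reproducing the full client:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_algolia_tools():
    # Launch the Node-based Algolia MCP server as a subprocess
    # (replace the path with your local checkout).
    server_params = StdioServerParameters(
        command="node",
        args=["--experimental-strip-types", "path/to/mcp-node/src/app.ts"],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # negotiate available tools
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(list_algolia_tools())
```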

Installation

Prerequisites

  • Python 3.10+

  • An MCP server, e.g., Algolia's mcp-node or any compatible Python/Node MCP server.

  • An Anthropic API Key (for Claude).

Clone the Repo

```bash
git clone https://github.com/Kiran1689/mcp-proxy-server
cd mcp-proxy-server
```

Install the Python Dependencies

Dependencies are managed with uv (see the ⚡️ Quickstart Guide below for `uv sync`).

⚡️ Quickstart Guide

Backend Setup:

  • Clone the repo:
```bash
git clone https://github.com/Kiran1689/mcp-proxy-server
cd mcp-proxy-server
```
  • Install dependencies:
```bash
uv sync
```
  • Configure your environment: Create a .env file with your Anthropic key:
```
ANTHROPIC_API_KEY=your_key
```
  • Install the Algolia MCP server:
    Download the latest release from the Algolia MCP Releases page.

  • Authenticate with your Algolia account
    After downloading, move into that directory and run the command below to authenticate with your Algolia account:

```bash
npm run start-server
```
  • Update the server path:
    In client.py, update the path to point at your local Algolia MCP server:

```python
server_params = StdioServerParameters(
    command="node",
    args=[
        "--experimental-strip-types",
        "--no-warnings=ExperimentalWarning",
        "C:\\Users\\kiran\\Downloads\\mcp-node-0.0.8\\mcp-node-0.0.8\\src\\app.ts",
    ],
)
```
  • Start the backend server:

```bash
uvicorn client_server:app --reload --port 8000
```

The server starts on port 8000, and if the connection succeeds you’ll see the list of available tools.🙂
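
If you want a quick smoke test without the frontend, you can post a question straight to the proxy. Note that the /query endpoint name and response shape below are assumptions (matching the sketch earlier in this post); check client_server.py for the actual routes:

```python
import requests

# Ask the proxy a question; it forwards it to Claude and the Algolia MCP tools.
resp = requests.post(
    "http://localhost:8000/query",  # endpoint name is an assumption
    json={"query": "List my Algolia indices"},
    timeout=60,
)
print(resp.json())
```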

(Screenshot: the backend server running, listing the available MCP tools)

Frontend Setup:

  • Clone the repo:
```bash
git clone https://github.com/Kiran1689/algolia-mcp-client
cd algolia-mcp-client
```
  • Install dependencies:
```bash
npm install
```
  • Run the application:

```bash
npm run dev
```

Open your browser at http://localhost:5173.

(Screenshot: the chat UI of the Algolia MCP client)

Tech Stack

I used:

  • Frontend: React + Vite + Tailwind CSS

  • Backend: FastAPI

  • LLM: Claude Sonnet 4

  • Charts: Chart.js

  • Markdown: react-markdown

How I Utilized the Algolia MCP Server

The Algolia MCP Server is the backbone of this entire project, as the custom MCP client I built is designed specifically to interface with it.

I downloaded the latest release from the GitHub repository and authenticated using my Algolia account. Once connected via my custom Python-based client, the server was ready to fetch and expose all available tools.

Overall, setting up and running the MCP server locally was quick and straightforward.

Key Takeaways

I didn’t want to build just another text-based chatbot; I wanted an interface that feels like a real app, with:

  • Proper text and markdown rendering

  • Support for charts, tables, and code blocks

  • A developer-focused UX

I began by taking UI inspiration from existing platforms like ChatGPT, Claude, and Perplexity, aiming to replicate their clean, intuitive layouts for a familiar user experience. To enhance visual appeal, I added modern UI components and my own styling. I also added a custom theme called “Algolia”, inspired by the official Algolia website, to create a more personalized and brand-aligned interface. 🙂

Backend Data Optimization

I wanted to render the MCP Server and LLM responses in a structured format. However, Claude Sonnet 4 was returning the data as plain strings. To address this, I added a system prompt instructing the LLM to format responses using Markdown, fenced code blocks, and structured charts.

Here’s the system prompt:

```python
system_prompt = """You are a helpful assistant. When responding:
1. For regular text, write in markdown format
2. For code examples, use standard markdown code blocks with language specification
3. For charts/graphs, use this EXACT format on a single line:
   CHART_START:chart_type:{"data":{"labels":[...],"datasets":[...]},"options":{...}}:CHART_END
Important: Keep the entire chart JSON on one line between CHART_START and CHART_END markers.
Do not show tool call logs or intermediate steps."""
```

Then I wrote a parser to break Claude’s response into structured content blocks (text, code, chart), so I could render it on the frontend.

````python
def parse_response_to_structured(self, response: str) -> List[Dict[str, Any]]:
    """Parse Claude's response into structured content blocks"""
    print("=== RAW RESPONSE ===")
    print(response)
    print("=== END RAW RESPONSE ===")

    content_blocks = []

    # Regex patterns to find special blocks
    chart_pattern = r'CHART_START:(\w+):(.*?):CHART_END'
    code_pattern = r'```(\w+)?\n(.*?)```'

    current_pos = 0
    special_blocks = []

    # Find charts
    for match in re.finditer(chart_pattern, response, re.DOTALL):
        special_blocks.append({
            'type': 'chart',
            'start': match.start(),
            'end': match.end(),
            'chart_type': match.group(1),
            'data': match.group(2).strip()
        })

    # Find code blocks
    for match in re.finditer(code_pattern, response, re.DOTALL):
        special_blocks.append({
            'type': 'code',
            'start': match.start(),
            'end': match.end(),
            'language': match.group(1) or 'text',
            'code': match.group(2).strip()
        })

    # Sort by position
    special_blocks.sort(key=lambda x: x['start'])

    # Build structured content
    for block in special_blocks:
        # Add text before this block
        if block['start'] > current_pos:
            text_content = response[current_pos:block['start']].strip()
            if text_content:
                # Replace inline backtick code `...` with *...*
                text_content = re.sub(r'`([^`]+)`', r'*\1*', text_content)
                content_blocks.append({"type": "text", "text": text_content})

        # Add the special block
        if block['type'] == 'chart':
            try:
                chart_data = json.loads(block['data'])
                content_blocks.append({
                    "type": "chart",
                    "chartType": block['chart_type'],
                    "data": chart_data.get('data', {}),
                    "options": chart_data.get('options', {})
                })
                print(f"✅ Successfully parsed chart: {block['chart_type']}")
            except json.JSONDecodeError as e:
                print(f"❌ Chart JSON parsing error: {e}")
                print(f"Problematic JSON: {block['data'][:100]}...")
                content_blocks.append({
                    "type": "text",
                    "text": f"[Chart parsing error: {str(e)}]"
                })
        elif block['type'] == 'code':
            content_blocks.append({
                "type": "code",
                "language": block['language'],
                "code": block['code']
            })

        current_pos = block['end']

    # Add remaining text
    if current_pos < len(response):
        remaining_text = response[current_pos:].strip()
        if remaining_text:
            remaining_text = self.replace_inline_code(remaining_text)
            content_blocks.append({"type": "text", "text": remaining_text})

    # If no special blocks found, treat entire response as text
    if not content_blocks:
        content_blocks.append({"type": "text", "text": response})

    print(f"📊 Final parsed blocks: {[block['type'] for block in content_blocks]}")
    return content_blocks
````

Eventually, I had clean, structured outputs that were easy to render — including real-time chart visualizations!😉
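
To illustrate (with made-up numbers), a reply that mixes prose and a chart parses into a list of blocks like the one below, which the frontend maps onto markdown, highlighted code, and Chart.js components. The field names follow the parser above; the data itself is hypothetical:

```python
# Hypothetical parser output for a reply containing text plus a bar chart.
content_blocks = [
    {"type": "text", "text": "Here is the record count per index:"},
    {
        "type": "chart",
        "chartType": "bar",
        "data": {
            "labels": ["products", "articles"],
            "datasets": [{"label": "Records", "data": [1200, 340]}],
        },
        "options": {},
    },
]
```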

Extensibility & Future Improvements

  • Database Integration: Easily extend chat storage to a cloud database for multi-device sync.
  • User Authentication: Add login and user management for team collaboration.
  • More Tools & Analytics: Integrate additional Algolia tools or third-party analytics.

🙏 Final Thoughts

This was a fun project that combined everything I enjoy: AI, frontend design, and backend orchestration. It showed me how powerful LLMs can be when they’re paired with good UX and real APIs.

If you're managing an Algolia setup and want a smart, chat-based assistant to help you run operations, this is for you.🙌

Thanks for this opportunity🫶

If you have questions or want to contribute, check out the repo or open an issue.
