GraphChat
GraphChat is a natural language querying tool integrated into Memgraph Lab, designed to transform how users interact with graph databases. It is aimed at non-technical users while still catering to advanced developers.
Using large language models (LLMs) such as OpenAI's GPT-4, it picks one of the available tools to query your graph, retrieve storage information, or run built-in algorithms like PageRank, delivering precise and actionable results.
How it works
Using GraphChat involves three key steps:
- Establish an LLM connection: Set up a connection by choosing an LLM provider, selecting a model, and configuring authentication and usage settings. Each connection includes details like the endpoint, headers, retry logic, and context preferences.
- Start chatting: Once connected, open the chat interface. You can create multiple threads to organize conversations by topic, question type, or model comparison. Each question–answer pair is stored as an exchange, which can be reused as context in future prompts.
- Let GraphChat handle the rest: GraphChat automatically selects the most appropriate tool, whether it's a built-in tool, a custom tool, or even a remote MCP server, when generating a Cypher query, running an algorithm, or inspecting metadata. You can review, adjust, or expand any answer to inspect the LLM's reasoning process and control the prompt context.
From Memgraph 2.16, GraphChat doesn't require MAGE to be installed. For schema information, GraphChat uses the `SHOW SCHEMA INFO` query if available. If the `SHOW SCHEMA INFO` query is not enabled, it tries the schema-related procedures, and if neither works, it falls back to running plain Cypher queries to infer the schema.
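For illustration, the first two options correspond to queries like the following; the procedure calls assume Memgraph's built-in `schema` query module is available:

```cypher
// Preferred: available when Memgraph is started with --schema-info-enabled
SHOW SCHEMA INFO;

// Fallback: schema-related procedures (built-in schema module)
CALL schema.node_type_properties();
CALL schema.rel_type_properties();
```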
Setting up an LLM connection
Before using GraphChat, you must have data stored in Memgraph (see the sample query below) and configure at least one LLM connection. Each connection is defined by:
- Provider configuration
- Model configuration - controls how the model responds
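If your database is still empty, you can load a small dataset before connecting. A minimal sketch using hypothetical labels and properties:

```cypher
// Create a tiny sample graph (hypothetical schema, for illustration only)
CREATE (alice:Person {name: "Alice"})-[:FOLLOWS]->(bob:Person {name: "Bob"}),
       (bob)-[:FOLLOWS]->(carol:Person {name: "Carol"});
```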
Providers
GraphChat supports connections to the following providers:
- Ollama - Requires a locally deployed Ollama model
- OpenAI
- Azure OpenAI - Requires an Azure OpenAI Service account
- DeepSeek
- Anthropic
- Gemini
Ollama
For local LLM model setup, you can use Ollama by setting up:
- Local endpoint URL, such as `http://localhost:11434`.
- Optional custom headers sent with each request.
If you are having issues connecting to Ollama, try using `host.docker.internal` instead of `localhost` or `127.0.0.1`. Additional settings may be required if you are using Docker or Docker Compose to run Memgraph and Memgraph Lab.
Learn more about Ollama and how to set it up for local LLM model use in the official Ollama documentation. Following the provider's setup guidelines ensures you can take full advantage of GraphChat's capabilities within Memgraph Lab.
OpenAI
Use OpenAI’s models for processing natural language queries. Set up a connection to OpenAI with:
- Valid OpenAI API key.
- Optional proxy endpoint.
- Optional custom headers with each request.
Azure OpenAI
Once you have a model deployment created and ready on Azure OpenAI, you can set up an LLM connection to Azure OpenAI by providing:
- Azure OpenAI service version.
- Azure OpenAI API key.
- Deployment endpoint.
- Deployment name.
- API mode, which can be `Responses` (the new API) or `Chat completions` (the old API). When you deploy a model to Azure OpenAI, the deployment details show whether it supports the Responses or the Chat completions API.
- Optional proxy endpoint.
- Optional custom headers with each request.
DeepSeek
Set up a connection to DeepSeek with:
- Valid DeepSeek API key.
- Optional proxy endpoint.
- Optional custom headers with each request.
Anthropic
Set up a connection to Anthropic models with:
- Valid Anthropic API key.
- Optional proxy endpoint.
- Optional custom headers with each request.
Gemini
Set up a connection to Google Gemini models with:
- Valid Gemini API key.
- Optional proxy endpoint.
- Optional custom headers with each request.
Model configuration
You can connect to the same provider multiple times using different models, depending on your specific use case. Once the connection is established, the chat interface will display the current conversation and its associated threads on the left.
You can fine-tune how each model behaves by adjusting the following configuration parameters from the chat interface or in the Lab Settings:
- Retry limit
- Temperature
- Max tokens
- TopP
- Frequency penalty
- Presence penalty
Additional settings allow for more control:
- Max previous exchanges – Limit how many prior messages GraphChat includes in each prompt to provide context-aware responses. Including more history can improve continuity, but may increase response costs.
- Tool usage step limit – Set the maximum number of reasoning steps GraphChat can take when using tools to answer a question. More steps enable deeper problem-solving but may increase latency and usage costs.
- LLM permissions – Limit the changes GraphChat can make to your database.
- System instructions – Define the assistant’s role, tone, and behavior. These instructions are always included at the start of the prompt.
- System additional context – Select predefined modules (graph schema, Cypher query specifics, various constraints) to enrich the assistant’s context. These are appended to the system instructions.
You can also create multiple configurations for the same model to suit different use cases.
Chat interface
GraphChat lets you create multiple threads to keep conversations organized—whether you’re exploring different topics or comparing results from various models.
Each question–answer pair is called an exchange. GraphChat can include previous exchanges in the prompt context to make responses more relevant. You have control over how that context is built:
- Default: The last five messages are included automatically.
- Customizable: In the model configuration, you can adjust the number of previous exchanges or manually exclude specific ones.
When asking a question, GraphChat shows how many previous exchanges will be used. To adjust this:
- Exclude specific exchanges using the thumbs down icon.
- Update the max previous exchanges parameter in the model configuration.
Coming soon: You’ll be able to manually select or deselect specific previous exchanges directly from the conversation view for even more customization.
To generate responses, GraphChat leverages:
- Prompt context - GraphChat constructs a detailed prompt that defines the assistant’s role, tone, and permissions, optionally including schema and Cypher-specific guidance to ensure accurate and context-aware responses.
- Tools - A collection of built-in and custom Cypher-backed tools that let the LLM query data, analyze graphs, and interact directly with the Memgraph database.
- MCP servers - External tool integrations that expand GraphChat’s capabilities by connecting to third-party or custom MCP servers through configurable connections.
- Exploring exchanges - Lets you inspect the LLM's reasoning process, view which tools were used, and examine the full context and schema involved in generating each response.
Prompt context
When you ask a question, GraphChat constructs a prompt context for the LLM that includes:
- Introduction - Define the assistant’s role, tone, and behavior. These instructions are always included at the start of the prompt. You can edit the default settings by adding new rules or completely redefining the role, tone, and behavior of the assistant.
- Graph schema (Optional) - If selected, Lab ensures that each LLM interaction has access to the graph schema. Without it, the LLM will attempt to infer the schema on its own.
- Query permissions (Optional) - If enabled, Lab updates the prompt context with query constraints, specifying whether the assistant can read, update, insert, and/or delete data in Memgraph.
- Cypher-specific notes (Optional) - Provides rules and guidance where Memgraph’s Cypher syntax differs from other Cypher-based databases.
Note: Large graph schemas can consume significant tokens in the LLM’s context window. In such cases, consider disabling automatic inclusion of the graph schema to optimize cost and performance.
Tools
GraphChat includes a set of built-in tools and supports creating custom Cypher-backed tools.
Built-in tools
Built-in tools cover a variety of tasks such as querying data, retrieving database information, running graph algorithms, checking indexes, managing triggers, and more:
- `run-cypher-query`: Generate and execute a Cypher query.
- `run-page-rank`: Identify the most connected and impactful nodes using PageRank.
- `run-betweenness-centrality`: Determine which nodes serve as critical bridges in the graph.
- `show-config`: List all Memgraph database configuration parameters.
- `show-schema-info`: Display the full database schema (requires the `--schema-info-enabled` flag on startup).
- `show-storage-info`: View memory usage, available memory, disk usage, and counts of nodes and relationships.
- `show-indexes`: List all database indexes.
- `show-constraints`: List all defined constraints.
- `show-triggers`: List all active triggers.
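To give a sense of what some of these tools run under the hood, here is a rough sketch; `SHOW STORAGE INFO` is a standard Memgraph query, while the PageRank call assumes the MAGE `pagerank` module is installed:

```cypher
// show-storage-info: memory, disk, and node/relationship counts
SHOW STORAGE INFO;

// run-page-rank: roughly what the tool issues against the database
CALL pagerank.get()
YIELD node, rank
RETURN node, rank
ORDER BY rank DESC
LIMIT 10;
```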
Tool usage before Lab 3.3
In earlier versions, GraphChat only used the `run-cypher-query` tool. This tool:
- Generates and runs Cypher queries from LLM prompts
- Automatically retries invalid queries up to the retry limit defined in the model configuration
If you want to replicate this behavior, disable all other tools in the configuration.
Cypher-backed tool
From Lab 3.3 onward, you can also define your own custom tools by providing:
- A tool name
- A description
- A parameterized Cypher query to execute on call
Make sure each custom tool has a unique name and clear description so the LLM can accurately select the appropriate tool when responding.
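For example, a custom tool might be defined along these lines (the name, description, and query below are hypothetical):

```cypher
// Hypothetical custom tool definition:
//   Name:        find-person-by-name
//   Description: Returns Person nodes whose name exactly matches $name.
MATCH (p:Person {name: $name})
RETURN p;
```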
MCP servers
Starting with Lab 3.6, you can integrate tools from MCP servers, where Lab acts as an MCP client. This greatly expands the capabilities of the assistant in GraphChat, allowing it to leverage external tools beyond the built-in and custom ones.
Setting up the connection
In the Settings > LLM tab, you can create and manage MCP connections. Lab currently supports the streamable HTTP transport layer. To set up a connection:
- Enter the URL where your MCP server is listening.
- (Optional) Add an access token; it will be sent as an `Authorization` header.
- (Optional) Add any custom headers required by your MCP server.
Upon a successful connection, Lab retrieves a list of tools provided by the MCP server. You can then select which of these tools you want to enable in GraphChat.
GraphChat automatically resyncs tools and MCP connections each time you open a new GraphChat view. If there's an issue with a connection, the affected tools will be unavailable, and you'll see an error response explaining why the connection failed, helping you troubleshoot and resolve the issue.
Managing MCP tools in GraphChat
Once an MCP connection is set up in Lab, you can manage its tools directly within GraphChat. When you open the tools list from the input box in GraphChat, you’ll see:
- All your active MCP servers
- The tools each server provides
You can choose which tools to enable for the current GraphChat conversation. Additionally, you can quickly enable or disable all tools from a specific MCP server with a single action, making it easy to tailor the toolset to your workflow.
Examples
Tavily Integration:
- Enter `https://mcp.tavily.com/mcp/` in the URL field.
- Register on Tavily to obtain an access token and add it in the Access Token field.
Memgraph MCP Integration:
- Pull the ai-toolkit/memgraph repository.
- Build the provided Dockerfile to start the MCP server.
- Enter `https://localhost:8000/mcp/` in the URL field.
Explore exchanges
After receiving an answer, you can expand it to inspect what the LLM did behind the scenes. If a tool was used, you’ll see which tool was called and the raw response from the Assistant.
You can also explore:
- Number of previous exchanges, tokens, and tools used
- Schema used – Click to visualize the included schema, if one was generated for the prompt by Lab
- Context used – Click to view the full prompt context sent to the LLM