
MongoDB MCP Server Usage Examples

This page demonstrates how to use the MongoDB MCP Server through practical examples with natural language prompts. Each example shows the prompt you can enter in your AI client and an example response you might receive from the large language model (LLM).

You can use the examples on this page as starting points for your own interactions with the MongoDB MCP Server.

Note

The exact output you receive will vary depending on your data, AI client, and the LLM you're using. Private information such as organization IDs, project IDs, and passwords is redacted in these examples.

Before running these examples, ensure that you have the following:

  • A working MongoDB MCP Server configuration.

  • Access to a MongoDB deployment with data to explore.

  • Sample data loaded into your deployment.

The following examples demonstrate how to manage your Atlas infrastructure by using the MongoDB MCP Server.

Get an overview of your Atlas account structure and available resources.

Show my Atlas organizations and projects
Ran ``atlas-list-orgs``
Ran ``atlas-list-projects``
Here are your MongoDB Atlas organizations and projects:
**Organizations:**
TestOrganization
Organization ID: XXX
**Projects:**
TestProject
Project ID: XXX
Organization: TestOrganization
Created: August 7, 2020 at 11:48:55 AM
You have one organization containing one project. This
project contains your TestCluster that we're currently
connected to.

Set up a new Atlas project with a cluster and all necessary access configurations in a single workflow.

Create a new Atlas project named myNewProject and create a
free cluster named myNewCluster in the new project, add
access list for my current IP, create a database user named
myNewUser with read and write access to the new cluster, and
return the connection string for the new cluster
I'll create a new Atlas project, free cluster, and set up
all the necessary access details for you.
Ran ``atlas-create-project``
Great! I've created a new project named "myNewProject"
in your MongoDB Atlas organization. Now let's list the
projects to get the project ID for the new project:
Ran ``atlas-list-projects``
Now I'll create a free cluster named "myNewCluster" in the
new project:
Ran ``atlas-create-free-cluster``
Now, I'll add your current IP address to the access list:
Ran ``atlas-create-access-list``
Now let's create a database user named "myNewUser" with read
and write access:
Ran ``atlas-create-db-user``
Perfect! Everything has been set up successfully:
**New Project Information:**
Project Name: myNewProject
Project ID: XXX
**New Cluster Information:**
Cluster Name: myNewCluster
Region: US East (US_EAST_1)
**Database User:**
Username: myNewUser
Password: XXX
Access: Read and write to any database
**Connection String:**
mongodb+srv://myNewUser:XXX@myNewCluster.mongodb.net/
Your current IP address has been added to the access list,
so you should be able to connect to the cluster from your
current location. The cluster will take a few minutes to
fully provision and be ready for connections.

The following examples demonstrate how you can use the MongoDB MCP Server to explore and understand your MongoDB data.

Discover what databases are available in your cluster and get insights about their contents.

Show my Atlas databases
Ran ``list-databases``
Here are the databases in your Atlas cluster "TestCluster":
**Sample Databases:**
sample_mflix - 138.4 MB (Movie database with films, theaters, users, comments)
sample_training - 53.6 MB (Training data for learning MongoDB)
sample_airbnb - 55.3 MB (Airbnb listings and reviews data)
sample_analytics - 10.0 MB (Analytics and customer data)
sample_restaurants - 6.8 MB (Restaurant and review data)
sample_weatherdata - 2.7 MB (Weather observation data)
sample_geospatial - 1.3 MB (Geospatial data examples)
sample_supplies - 1.1 MB (Supply chain data)
sample_guides - 41 KB (Getting started guides)
**System Databases:**
local - 4.1 GB (MongoDB internal database)
admin - 365 KB (Administrative database)
Your cluster contains MongoDB's sample datasets, which are
great for learning and testing. The largest databases are
sample_mflix (movies), sample_training, and sample_airbnb.
Would you like me to explore any specific database or show
you the collections within one of them?

Dive deeper into a specific database to understand its structure and collections.

Show the collections in the Atlas sample_mflix database
Ran ``list-collections``
Here are the collections in the sample_mflix database:
**Core Collections:**
movies - Main collection containing movie data
(titles, genres, cast, ratings, etc.)
users - User profiles and account information
comments - User comments and reviews on movies
theaters - Movie theater locations and details
sessions - User session data
**Additional Collections:**
embedded_movies - Movies collection with embedded/denormalized data structure
The movies collection contains detailed information
about films including plot, cast, directors, ratings,
release dates, and genres. The comments collection links to
movies and users to store user reviews and discussions.
Would you like me to explore any specific collection in more
detail, such as showing sample documents or collection
statistics?

Analyze your MongoDB data to identify patterns and trends.

Summarize the Atlas movies collection by thrillers
I'll help you analyze the Atlas movies collection focusing
on thrillers. First, let me check what databases are
available in your connected cluster.
Ran ``list-databases``
Perfect! I can see you have the sample_mflix database which
contains the movies collection. Let me check the collections
in that database:
Ran ``list-collections``
Great! Now let me analyze the movies collection to find
thrillers and provide a summary. First, let me check the
schema to understand how genres are stored:
Ran ``collection-schema``
Perfect! I can see that genres are stored as an array of
strings. Now let me get a count of thriller movies and then
perform an aggregation to summarize them:
Ran ``count``
Now let me create a comprehensive summary of thriller movies
using aggregation:
Ran ``aggregate``
I see there are some data quality issues. Let me get better
data by filtering for movies with numeric ratings:
Ran ``aggregate``
Now let me get some more summary statistics by decade:
Ran ``aggregate``
Here's a comprehensive analysis of thriller movies in your
Atlas sample_mflix collection:
**Overall Statistics**
Total Thriller Movies: 2,454 films
Time Span: 1924 to 2011
Average Rating: 6.3/10 (IMDb)
Average Runtime: 105 minutes
Total IMDb Votes: 85.8 million across all thrillers
**Top-Rated Thriller Movies**
Drishyam (2013) - 8.9/10 - Indian thriller directed by Jeethu Joseph
Tinker Tailor Soldier Spy (1979) - 8.7/10 - British spy thriller
Rear Window (1954) - 8.6/10 - Alfred Hitchcock classic
The Silence of the Lambs (1991) - 8.6/10 - Psychological thriller
Se7en (1995) - 8.6/10 - David Fincher crime thriller
The Usual Suspects (1995) - 8.6/10 - Neo-noir mystery
Léon: The Professional (1994) - 8.6/10 - Action thriller
Psycho (1960) - 8.6/10 - Another Hitchcock masterpiece
Memento (2000) - 8.5/10 - Christopher Nolan psychological thriller
**Trends by Decade**
1920s-1950s: Few thrillers but high quality (avg 7.2-7.8 rating)
1960s-1970s: Growing popularity, still maintaining quality
1980s: Significant growth (200 films), quality decline begins
1990s: Peak production era (417 films), avg 6.3 rating
2000s: Highest volume (854 films), avg 6.24 rating
2010s: 730 films, lowest average rating (6.12)
The data shows that while thriller production increased
dramatically from the 1980s onward, the average quality (as
measured by IMDb ratings) has generally declined, suggesting
the genre became more commercialized with higher volume but
potentially lower artistic standards in recent decades.

The following examples demonstrate how to use the MongoDB MCP Server to perform common database operations.

Add new data to your collections.

Add a document to the movies_copy collection
I'll add a new document to the movies_copy collection. Let
me create a sample movie document with the same structure as
the existing movies:
Ran ``insert-many``
Perfect! I've successfully added a new movie document to the
movies_copy collection. Here's what I added:
**New Movie Added:**
Title: "The AI Thriller"
Genres: Thriller, Sci-Fi, Drama
Year: 2024
Runtime: 118 minutes
IMDb Rating: 7.8/10 (125,000 votes)
Cast: John Smith, Sarah Johnson, Michael Chen
Director: Alex Rodriguez
Rating: PG-13
The document was inserted with ID: 684b15fb5084ddef1f265e8f
Let me verify the collection now has one additional
document:
Ran ``count``
Confirmed! The movies_copy collection now contains 21,350
documents (up from 21,349), including your new thriller
movie about AI consciousness.
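A document passed to ``insert-many`` for this example might look like the sketch below. The field names mirror the sample_mflix movies schema; the title, cast, and rating values are the hypothetical ones from the response above, not real data:

```javascript
// Illustrative document shape for the movies_copy collection,
// mirroring the sample_mflix movies schema. All values here are
// the hypothetical ones from the example response above.
const newMovie = {
  title: "The AI Thriller",
  genres: ["Thriller", "Sci-Fi", "Drama"],
  year: 2024,
  runtime: 118,
  imdb: { rating: 7.8, votes: 125000 },
  cast: ["John Smith", "Sarah Johnson", "Michael Chen"],
  directors: ["Alex Rodriguez"],
  rated: "PG-13"
};

// In mongosh: db.movies_copy.insertMany([newMovie])
console.log(newMovie.title);
```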

Create and customize new collections.

Create a new collection to store movie purchases data that
includes geospatial and timeseries fields
I'll create a new collection for storing movie purchases
data with geospatial and timeseries fields. Let me first
create the collection and then add some sample documents
with the appropriate structure.
Ran ``create-collection``
Now I'll insert sample documents that include geospatial
data (customer location, theater location) and timeseries
data (purchase timestamp, viewing timestamp):
Ran ``insert-many``
Perfect! I've created the movie_purchases collection with
sample data that includes both geospatial and timeseries
fields. Here's what the collection contains:
**Collection Structure**
**Geospatial Fields:**
customer_location: GeoJSON Point with customer's coordinates
theater_location: GeoJSON Point with theater coordinates
customer_address: Human-readable address information
**Timeseries Fields:**
purchase_timestamp: When the purchase was made
viewing_timestamp: When the movie was actually watched
viewing_duration_minutes: How long the customer watched
completion_percentage: Percentage of movie completed
**Sample Data Added:**
Purchase 1: Customer in San Francisco bought "The Shawshank Redemption" rental for $12.99
Purchase 2: Customer in New York purchased "Pulp Fiction" for $15.99
Purchase 3: Customer in Los Angeles rented "Se7en" for $9.99
Now let me create a geospatial index to optimize
location-based queries:
Ran ``create-index``
Excellent! The movie_purchases collection is now ready with:
- Geospatial capabilities - 2dsphere index for location-based queries
- Timeseries optimization - Index on purchase timestamps
- Sample data - 3 purchase records across different US cities
This collection can now support queries like:
- Finding purchases within a certain radius of a location
- Analyzing purchase patterns over time
- Correlating viewing behavior with geographic regions
- Time-based analytics and trending
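A sketch of the document shape and index specifications described above is shown below. The field names (``customer_location``, ``purchase_timestamp``, and so on) come from the example response; the values are illustrative:

```javascript
// Illustrative movie_purchases document with GeoJSON and
// time-oriented fields, matching the structure described above.
const purchase = {
  movie_title: "The Shawshank Redemption",
  purchase_type: "rental",
  price: 12.99,
  customer_location: {                     // GeoJSON Point
    type: "Point",
    coordinates: [-122.4194, 37.7749]      // [longitude, latitude]
  },
  purchase_timestamp: new Date("2024-01-15T19:30:00Z"),
  viewing_timestamp: new Date("2024-01-15T20:00:00Z"),
  viewing_duration_minutes: 142,
  completion_percentage: 100
};

// Index specs for the two access patterns. 2dsphere indexes expect
// GeoJSON coordinates in [longitude, latitude] order.
const geoIndex = { customer_location: "2dsphere" };
const timeIndex = { purchase_timestamp: 1 };

// In mongosh:
//   db.movie_purchases.createIndex(geoIndex)
//   db.movie_purchases.createIndex(timeIndex)
console.log(purchase.customer_location.type);
```

With the 2dsphere index in place, radius queries like the ones listed above can use ``$geoWithin`` or ``$near`` on ``customer_location``.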

Create a backup or duplicate of an existing collection.

Make a copy of the movies collection named movies_copy
I'll create a copy of the movies collection named
movies_copy. I'll use an aggregation pipeline with $out to
copy all documents from the movies collection to a new
movies_copy collection.
Ran ``aggregate``
Perfect! I've successfully created a copy of the movies
collection named movies_copy. Let me verify the copy was
created correctly by checking the document count:
Ran ``count``
Found 21349 documents in the collection "movies_copy"
Excellent! The movies_copy collection has been successfully
created with all 21,349 documents from the original movies
collection. The copy operation completed successfully and
you now have an identical backup of your movies collection.
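The copy relies on the ``$out`` aggregation stage, which writes the full output of a pipeline to a named collection. A minimal sketch of the pipeline:

```javascript
// $out writes everything the pipeline emits to the named
// collection; with no preceding stages, that is every document
// in the source collection.
const copyPipeline = [{ $out: "movies_copy" }];

// In mongosh: db.movies.aggregate(copyPipeline)
// Verify afterwards with: db.movies_copy.countDocuments()
console.log(Object.keys(copyPipeline[0])[0]);
```

Note that ``$out`` replaces the target collection if it already exists, so this pattern creates a fresh copy but will overwrite any existing ``movies_copy``.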

The following example demonstrates how to export query results for sharing or further processing in external tools.

Generate comprehensive reports and export them for use in other applications or for sharing with team members.

The exported results are saved to a file on the computer that runs the MCP Server. You can also access the exported data through the exported-data resource using the AI client application.

The export file is stored in the file system path specified by the exportPath configuration option. The export file is eligible for deletion after the time period specified by the exportTimeout configuration option. For additional details, see Export Data from MongoDB MCP Server.

Summarize and export the contents of the movies collection.
I'll help you summarize and export the contents of the movies
collection from your connected MongoDB Atlas cluster. Let's do
this in two steps:
1. First, I'll create a summary of the movies collection using an
aggregation pipeline.
2. Then I'll export the results using the export tool.
Let's start by getting a summary of the movies collection:
Ran ``aggregate``
Now that we have a summary of the movies collection, let's create
a more focused dataset to export. Let's get the top 10
highest-rated movies:
Ran ``aggregate``
Now, let's export this data using the MCP Server export tool:
Ran ``export``
I've successfully summarized and exported the contents of the
movies collection from your MongoDB Atlas cluster. Here's a
breakdown of what I did:
**Movies Collection Summary**
**Collection Overview:**
The collection contains 21,349 movie documents
Time span: 1896 to 2016 (120 years of cinema history)
Average IMDb rating: 6.66 out of 10
Average runtime: ~104 minutes
**Top Genres:**
Drama (12,385 movies)
Comedy (6,532 movies)
Romance (3,318 movies)
Crime (2,457 movies)
Thriller (2,454 movies)
**Top Directors:**
Woody Allen (40 movies)
Martin Scorsese (32 movies)
Takashi Miike (31 movies)
Steven Spielberg (29 movies)
John Ford (29 movies)
**Top Languages:**
English (14,639 movies)
French (2,405 movies)
Spanish (1,600 movies)
German (1,377 movies)
Italian (1,228 movies)
**Highest Rated Movies:**
Band of Brothers (2001) - 9.6/10
Planet Earth (2006) - 9.5/10
The Civil War (1990) - 9.4/10
A Brave Heart: The Lizzie Velasquez Story (2015) - 9.4/10
Cosmos (1980) - 9.3/10
**Export Details**
The data has been exported to a JSON file in
relaxed format. You can access the exported file at:
/Users/XXX/.mongodb/mongodb-mcp/exports/68b8c10a6d0291128800be0b/sample_mflix.movies.68b8c2496d0291128800be0d.json
The export includes:
- Genre summary with counts and average ratings
- Runtime statistics (min, max, average)
- Year-by-decade summary with counts and ratings
- Top directors with film counts
- Language distribution
- Top 10 highest-rated movies
This export provides a comprehensive overview of the movies
collection that can be easily shared or imported into other tools
for further analysis or visualization.
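The "top 10 highest-rated movies" step could be expressed as a pipeline like the following sketch. Field names follow the sample_mflix schema; the exact pipeline the LLM ran is not shown in the transcript:

```javascript
// Illustrative pipeline for the top-10 export: filter out
// non-numeric ratings, sort descending, and project a compact
// shape suitable for export.
const topTen = [
  { $match: { "imdb.rating": { $type: "number" } } },
  { $sort: { "imdb.rating": -1 } },
  { $limit: 10 },
  { $project: { _id: 0, title: 1, year: 1, rating: "$imdb.rating" } }
];

// In mongosh: db.movies.aggregate(topTen)
console.log(topTen.length);
```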

The following examples demonstrate how you can use the MongoDB MCP Server to identify and resolve performance issues. The actual output for your prompts will depend on whether your cluster is experiencing performance issues, so your output might not match the examples exactly.

These examples use the atlas-list-performance-advisor tool to retrieve recommendations from the Performance Advisor.

Identify and analyze slow-performing queries to understand performance bottlenecks.

Note

When performing slow query analysis, the MongoDB MCP Server retrieves a sample of up to 50 of the most recent slow queries that match any conditions specified in your prompt. This cap ensures optimal performance and response times.

Query for all slow queries in the cluster:

Show me slow queries in my cluster
Ran ``atlas-list-performance-advisor``
I found extensive slow query logs for ClusterM10. Here's a summary of the slow queries detected:
**Query Types and Patterns**
**Most Common Slow Queries:**
**Collection Statistics Queries ($collStats)**
- **Collections**: SchemaAdvisor.Call to Action: Remove unnecessary indexes and SchemaAdvisor.Call to Action: Remove unnecessary indexes 2
- **Duration**: 30-100+ milliseconds
- **Pattern**: These are MongoDB Automation Agent queries collecting storage statistics
- **Frequency**: Very frequent (every 10 minutes)
**Collection Statistics for mcp_db.new**
- **Duration**: 30-60+ milliseconds
- **Pattern**: Similar $collStats operations
**Find Query on sample_db.test**
- **Query**: {"find": "dbC", "filter": {"z": 10}}
- **Duration**: 5 milliseconds
- **Plan**: COLLSCAN (Collection Scan)
- **Performance Issue**: Examined 10,001 documents but only returned 1
- This matches the index suggestion we saw earlier!
**Key Performance Issues**
**Inefficient Query on sample_db.test:**
- The find query {"z": 10} is doing a full collection scan
- This directly relates to the index suggestion we found: creating an index on field z would significantly improve this query
- Currently examining 10,001 documents to find 1 result
**Automation Agent Overhead:**
- Many slow queries are from MongoDB Automation Agent collecting statistics
- These are system-level operations that run regularly
- Duration typically 30-100ms, which is acceptable for system operations
**Recommendations**
**Implement the Suggested Index:**
- Create an index on {"z": 1} for the sample_db.test collection
- This will eliminate the collection scan and dramatically improve the find query performance
**Monitor Automation Agent Performance:**
- The frequent $collStats operations are normal but worth monitoring
- Consider if the collection statistics collection frequency can be optimized
The most actionable finding is the inefficient query on sample_db.test that would benefit from the index suggestion we identified earlier.

Query for general performance issues:

How is my cluster performance?
Ran ``atlas-list-performance-advisor``
Based on the performance advisor data I retrieved for ClusterM10, here's an assessment of your cluster performance:
**Overall Performance Assessment: Good with Room for Optimization**
**✅ Positive Indicators**
- **Query Response Times**: Most queries are completing in reasonable timeframes (30-100ms)
- **No Critical Issues**: No extremely slow queries (>1000ms) detected
- **Stable Operations**: Consistent performance patterns over time
- **System Health**: MongoDB Automation Agent operations are functioning normally
**⚠️ Areas for Improvement**
**Inefficient Query Pattern:**
- **Issue**: Find query {"z": 10} on sample_db.test collection
- **Current**: Full collection scan examining 10,001 documents
- **Impact**: Unnecessary resource consumption
- **Solution**: Implement suggested index on field z
**Resource Utilization:**
- **CPU Usage**: Queries consuming 5-45+ CPU nanoseconds
- **Memory**: Response sizes up to 602KB for statistics queries
- **I/O**: Some queries showing storage read operations
**📊 Performance Metrics Summary**
- **Slow Query Count**: 50+ entries in recent logs
- **Average Query Duration**: 30-100ms (acceptable range)
- **Most Common Operations**: Collection statistics gathering
- **Index Optimization Opportunity**: 1 significant improvement identified
**🎯 Immediate Recommendations**
**High Priority**: Create index {"z": 1} on sample_db.test collection
- **Expected Impact**: Eliminate collection scan, reduce query time from 5ms to <1ms
- **Resource Savings**: Reduce CPU and I/O usage significantly
**Medium Priority**: Monitor automation agent performance
- Current 30-100ms for $collStats operations is acceptable
- Consider optimizing if frequency becomes problematic
**📈 Performance Grade: B+**
Your cluster is performing well overall, but implementing the suggested index would elevate it to an A-grade performance level. The main bottleneck is the inefficient query pattern that can be easily resolved with proper indexing.
Would you like me to help you implement the suggested index or dive deeper into any specific performance metrics?

Query for a specific operation type, duration, or namespace:

Show me slow writes for the past 24 hours in the movies collection
Ran ``atlas-list-performance-advisor``
Based on the slow query logs from the sample_db.test collection, I can see that there are no slow write operations in this collection over the past 24 hours.
**Analysis Summary**
**Slow Query Logs Found in sample_db.test:**
- **1 slow query detected** - this is a READ operation (find query)
- **No write operations found** in the slow query logs
- Query occurred on 2025-09-30 at 14:52:56 UTC
**Detailed Breakdown of the Slow Query**
**Query Details:**
``db.dbC.find({ "z": 10 })``
**Performance Characteristics:**
- **Operation Type**: find (READ operation)
- **Collection Scan**: ``"planSummary": "COLLSCAN"``
- **Documents Examined**: 10,001 documents
- **Keys Examined**: 0 (no index usage)
- **Duration**: 6 milliseconds
- **Results Returned**: 1 document
- **Planning Time**: 1.78ms
- **Mirrored Query**: Yes (read preference: secondaryPreferred)
**Performance Issues:**
- **Missing Index**: The query on field ``z`` is performing a collection scan
- **Inefficient**: Examining 10,001 documents to find 1 result
- **Read Preference**: Using secondaryPreferred (mirrored query)
**Key Findings**
✅ **Write Performance**: No slow write operations detected
⚠️ **Read Performance**: One slow read operation due to missing index

Query for a specific execution time:

Show me queries longer than 5 seconds
Ran ``atlas-list-performance-advisor``
Based on the slow query logs from your MongoDB Atlas cluster, I can see that there are no queries longer than 5 seconds in your cluster performance data.
**Analysis Summary**
**Query Duration Analysis:**
- All slow queries are under 5 seconds - the longest queries are around 117ms (0.117 seconds)
- Most queries are between 30-100ms duration
- No queries exceed 5 seconds (5000ms)
**Detailed Breakdown of Slow Queries**
The slow query logs show primarily MongoDB Automation Agent queries with the following characteristics:
**Query Types Found:**
**SchemaAdvisor Collections** - MongoDB's automated performance monitoring queries
- **Duration**: 30-117ms
- **Operations**: ``$collStats`` aggregation queries
- **Purpose**: Automated performance analysis
**smol.woofer Collection** - User application queries
- **Duration**: 33ms
- **Operations**: Aggregation with ``$match`` and ``$lookup``
- **Issue**: Collection scan (no index usage)
**mcp_db.new Collection** - Storage statistics queries
- **Duration**: 33-69ms
- **Operations**: ``$collStats`` aggregation
**sample_db.test Collection** - Find queries
- **Duration**: 6ms
- **Operations**: find with collection scan
**Performance Status**: ✅ Good
Your cluster is performing well with:
- No queries longer than 5 seconds
- All slow queries are under 200ms
- Most queries are under 100ms
The slow queries are primarily from MongoDB's automated monitoring systems, which is normal and expected behavior.

Use the Performance Advisor to suggest indexes to create or drop to improve performance.

What indexes should I create or drop to improve performance?
Ran ``atlas-list-performance-advisor``
Based on the performance advisor analysis of your MongoDB Atlas cluster "ClusterM10", here are the specific index recommendations:
**Index to CREATE:**
**High Priority - Create Index on sample_db.test collection:**
- **Index**: { "z": 1 }
- **Weight**: 1,020,000 bytes (≈ 1.02 MB saved per query)
- **Impact**: This index will significantly improve performance for queries filtering on the z field
- **Reason**: The slow query logs show a query {"z": 10} that performed a collection scan (COLLSCAN) and examined 10,001 documents to return just 1 result
**Indexes to DROP:**
**No Drop Recommendations Found**
The performance advisor found no indexes that should be dropped at this time.
**Additional Performance Insights:**
**Schema Recommendations:**
- **Remove Unnecessary Indexes**: You have collections with more than 30 indexes, which can impact write performance
- **Reduce Document Size**: Some documents exceed 2MB, which can cause performance issues
- **Avoid Unbounded Arrays**: Some collections have arrays with over 10,000 entries
**Slow Query Analysis:**
The slow query logs show that most slow queries are from MongoDB's internal automation agent running $collStats operations, which is normal. However, there's one user query that would benefit significantly from the suggested index:
- **Query**: db.dbC.find({"z": 10})
- **Current Performance**: Collection scan examining 10,001 documents
- **Expected Improvement**: With the suggested index, this should become an index scan examining only the matching documents
**Recommended Action:**
Create the index { "z": 1 } on the sample_db.test collection. This single index creation could save approximately 1MB of disk reads per query execution for queries filtering on the z field.

Get recommendations for improving your database schema design and structure.

Suggest schema improvements for my database
Ran ``atlas-list-performance-advisor``
Based on the performance advisor analysis of your MongoDB Atlas cluster, here are the specific schema improvements recommended:
**Critical Schema Issues Found:**
**1. Remove Unnecessary Indexes**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Remove unnecessary indexes
- SchemaAdvisor.Call to Action: Remove unnecessary indexes 2
**Issue**: Both collections have more than 30 indexes each
**Impact:**
- Slower write operations
- Increased storage overhead
- Longer index maintenance time
- Reduced memory efficiency
**Recommendation:**
- Audit all indexes in these collections
- Remove indexes that are not actively used by your queries
- Keep only indexes that support your application's query patterns
- Consider compound indexes instead of multiple single-field indexes
**2. Reduce Document Size**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Reduce the size of documents
- SchemaAdvisor.Call to Action: Reduce the size of documents 2
**Issue**: Documents larger than 2MB found
**Impact:**
- Documents cannot be stored in MongoDB (16MB limit)
- Performance degradation for large documents
- Increased memory usage
- Slower network transfer
**Recommendations:**
- Split large documents into smaller, related documents
- Use GridFS for storing large binary data
- Normalize data by moving large arrays or objects to separate collections
- Archive old data to reduce document size
- Compress data where appropriate
**3. Avoid Unbounded Arrays**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Avoid using unbounded arrays in documents
**Issue**: Arrays with over 10,000 entries detected
**Impact:**
- Poor query performance on array elements
- Memory consumption issues
- Index inefficiency
- Difficult to maintain and update
**Recommendations:**
- Move large arrays to separate collections with references
- Implement pagination for array access
- Use embedded documents with limits (e.g., max 100 items)
- Consider denormalization for frequently accessed array elements
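One common way to enforce the "embedded documents with limits" suggestion is a ``$push`` update that combines ``$each`` with ``$slice``, which caps the array on every write. The ``recent_reviews`` field and its contents below are hypothetical, for illustration only:

```javascript
// Bounded-array update pattern: $push with $each and $slice keeps
// only the last 100 entries of the (hypothetical) recent_reviews
// array, so it can never grow without bound.
const update = {
  $push: {
    recent_reviews: {
      $each: [{ user: "alice", rating: 4 }],
      $slice: -100               // negative slice keeps the newest 100
    }
  }
};

// In mongosh: db.movies.updateOne({ _id: movieId }, update)
console.log(update.$push.recent_reviews.$slice);
```

Older entries trimmed by ``$slice`` can be archived to a separate collection if they still need to be queryable, which matches the "move large arrays to separate collections with references" recommendation above.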

For more information about configuring and using the MongoDB MCP Server:

  • Learn about all available configuration options and connection methods.

  • Explore the complete list of tools and their capabilities.

  • Understand how to securely deploy and use the MCP Server.
