Transform how AI assistants work with CSV data. CSV Editor is a high-performance MCP server that gives Claude, ChatGPT, and other AI assistants powerful data manipulation capabilities through simple commands.
AI assistants struggle with complex data operations - they can read files but lack tools for filtering, transforming, analyzing, and validating CSV data efficiently.
CSV Editor bridges this gap by providing AI assistants with 40+ specialized tools for CSV operations, turning them into powerful data analysts that can:
- Clean messy datasets in seconds
- Perform complex statistical analysis
- Validate data quality automatically
- Transform data with natural language commands
- Track all changes with undo/redo capabilities
| Feature | CSV Editor | Traditional Tools |
|---|---|---|
| AI Integration | Native MCP protocol | Manual operations |
| Auto-Save | Automatic with strategies | Manual save required |
| History Tracking | Full undo/redo with snapshots | Limited or none |
| Session Management | Multi-user isolated sessions | Single user |
| Data Validation | Built-in quality scoring | Separate tools needed |
| Performance | Handles GB+ files with chunking | Memory limitations |
```text
# Your AI assistant can now do this:
"Load the sales data and remove duplicates"
"Filter for Q4 2024 transactions over $10,000"
"Calculate correlation between price and quantity"
"Fill missing values with the median"
"Export as Excel with the analysis"

# All with automatic history tracking and undo capability!
```

To install csv-editor for Claude Desktop automatically via Smithery:
```bash
npx -y @smithery/cli install @santoshray02/csv-editor --client claude
```

Or install from source with uv:

```bash
# Install uv if needed (one-time setup)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and run
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
uv sync
uv run csv-editor
```

**Claude Desktop**
Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS):
{ "mcpServers": { "csv-editor": { "command": "uv", "args": ["tool", "run", "csv-editor"], "env": { "CSV_MAX_FILE_SIZE": "1073741824" } } } }Other Clients (Continue, Cline, Windsurf, Zed)
See MCP_CONFIG.md for detailed configuration.
```python
# Morning: Load yesterday's data
session = load_csv("daily_sales.csv")

# Clean: Remove duplicates and fix types
remove_duplicates(session_id)
change_column_type("date", "datetime")
fill_missing_values(strategy="median", columns=["revenue"])

# Analyze: Get insights
get_statistics(columns=["revenue", "quantity"])
detect_outliers(method="iqr", threshold=1.5)
get_correlation_matrix(min_correlation=0.5)

# Report: Export cleaned data
export_csv(format="excel", file_path="clean_sales.xlsx")
```

```python
# Extract from multiple sources
load_csv_from_url("https://api.example.com/data.csv")

# Transform with complex operations
filter_rows(conditions=[
    {"column": "status", "operator": "==", "value": "active"},
    {"column": "amount", "operator": ">", "value": 1000}
])
add_column(name="quarter", formula="Q{(month-1)//3 + 1}")
group_by_aggregate(group_by=["quarter"], aggregations={
    "amount": ["sum", "mean"],
    "customer_id": "count"
})

# Load to different formats
export_csv(format="parquet")  # For data warehouse
export_csv(format="json")     # For API
```

```python
# Validate incoming data
validate_schema(schema={
    "customer_id": {"type": "integer", "required": True},
    "email": {"type": "string", "pattern": r"^[^@]+@[^@]+\.[^@]+$"},
    "age": {"type": "integer", "min": 0, "max": 120}
})

# Quality scoring
quality_report = check_data_quality()
# Returns: overall_score, missing_data%, duplicates, outliers

# Anomaly detection
anomalies = find_anomalies(methods=["statistical", "pattern"])
```

- Load & Export: CSV, JSON, Excel, Parquet, HTML, Markdown
- Transform: Filter, sort, group, pivot, join
- Clean: Remove duplicates, handle missing values, fix types
- Calculate: Add computed columns, aggregations
- Statistics: Descriptive stats, correlations, distributions
- Outliers: IQR, Z-score, custom thresholds
- Profiling: Complete data quality reports
- Validation: Schema checking, quality scoring
- Auto-Save: Never lose work with configurable strategies
- History: Full undo/redo with operation tracking
- Sessions: Multi-user support with isolation (see the sketch after this list)
- Performance: Stream processing for large files
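For example, two datasets can be worked on side by side without interfering. This is a minimal sketch in the tool-call style of the examples above; the file and column names are illustrative, and the exact shape of the returned session handle may differ:

```python
# Each load_csv call creates its own isolated session
sales_session = load_csv("sales_2024.csv")      # illustrative file name
returns_session = load_csv("returns_2024.csv")  # illustrative file name

# Operations are scoped by session_id, so the two datasets stay independent
get_statistics(session_id=sales_session, columns=["revenue"])
get_statistics(session_id=returns_session, columns=["refund_amount"])

# Inspect active sessions and clean up when finished
list_sessions()
close_session(session_id=returns_session)
```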
**Complete Tool List (40+ tools)**
- `load_csv` - Load from file
- `load_csv_from_url` - Load from URL
- `load_csv_from_content` - Load from string
- `export_csv` - Export to various formats
- `get_session_info` - Session details
- `list_sessions` - Active sessions
- `close_session` - Cleanup
- `filter_rows` - Complex filtering
- `sort_data` - Multi-column sort
- `select_columns` - Column selection
- `rename_columns` - Rename columns
- `add_column` - Add computed columns
- `remove_columns` - Remove columns
- `update_column` - Update values
- `change_column_type` - Type conversion
- `fill_missing_values` - Handle nulls
- `remove_duplicates` - Deduplicate
- `get_statistics` - Statistical summary
- `get_column_statistics` - Column stats
- `get_correlation_matrix` - Correlations
- `group_by_aggregate` - Group operations
- `get_value_counts` - Frequency counts
- `detect_outliers` - Find outliers
- `profile_data` - Data profiling
- `validate_schema` - Schema validation
- `check_data_quality` - Quality metrics
- `find_anomalies` - Anomaly detection
- `configure_auto_save` - Setup auto-save
- `get_auto_save_status` - Check status
- `undo` / `redo` - Navigate history
- `get_history` - View operations
- `restore_to_operation` - Time travel
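A quick sketch of the history tools in action, using the tool names listed above; the `operation_id` argument name and the column names are illustrative:

```python
# Make a couple of changes, then inspect and navigate the history
remove_duplicates(session_id)
fill_missing_values(strategy="median", columns=["revenue"])

get_history(session_id)  # list the recorded operations
undo(session_id)         # revert the fill_missing_values step
redo(session_id)         # reapply it

# Jump back to an earlier snapshot (argument name is illustrative)
restore_to_operation(session_id, operation_id="<id from get_history>")
```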
| Variable | Default | Description |
|---|---|---|
| `CSV_MAX_FILE_SIZE` | 1GB | Maximum file size |
| `CSV_SESSION_TIMEOUT` | 3600s | Session timeout |
| `CSV_CHUNK_SIZE` | 10000 | Processing chunk size |
| `CSV_AUTO_SAVE` | true | Enable auto-save |
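As a rough illustration (not the server's actual internals), these variables resolve to their defaults along these lines:

```python
import os

# Hypothetical sketch: variable names and defaults are taken from the table above
MAX_FILE_SIZE = int(os.environ.get("CSV_MAX_FILE_SIZE", 1024 ** 3))  # 1 GB in bytes
SESSION_TIMEOUT = int(os.environ.get("CSV_SESSION_TIMEOUT", 3600))   # seconds
CHUNK_SIZE = int(os.environ.get("CSV_CHUNK_SIZE", 10_000))           # chunk size
AUTO_SAVE = os.environ.get("CSV_AUTO_SAVE", "true").lower() == "true"
```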
CSV Editor automatically saves your work with configurable strategies:
- Overwrite (default) - Update original file
- Backup - Create timestamped backups
- Versioned - Maintain version history
- Custom - Save to specified location
```python
# Configure auto-save
configure_auto_save(
    strategy="backup",
    backup_dir="/backups",
    max_backups=10
)
```

**Alternative Installation Methods**
```bash
# Install from source
git clone https://github.com/santoshray02/csv-editor.git
cd csv-editor
pip install -e .

# Or install with pipx
pipx install git+https://github.com/santoshray02/csv-editor.git

# Install latest version
pip install git+https://github.com/santoshray02/csv-editor.git

# Or using uv
uv pip install git+https://github.com/santoshray02/csv-editor.git

# Install specific version
pip install git+https://github.com/santoshray02/csv-editor.git@v1.0.1
```

Development commands:

```bash
uv run test        # Run tests
uv run test-cov    # With coverage
uv run all-checks  # Format, lint, type-check, test
```

Project structure:

```text
csv-editor/
├── src/csv_editor/    # Core implementation
│   ├── tools/         # MCP tool implementations
│   ├── models/        # Data models
│   └── server.py      # MCP server
├── tests/             # Test suite
├── examples/          # Usage examples
└── docs/              # Documentation
```

We welcome contributions! See CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch
- Make your changes with tests
- Run `uv run all-checks`
- Submit a pull request
- SQL query interface
- Real-time collaboration
- Advanced visualizations
- Machine learning integrations
- Cloud storage support
- Performance optimizations for 10GB+ files
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki
MIT License - see LICENSE file
Built with:
Ready to supercharge your AI's data capabilities? Get started in 2 minutes β