Python ollama

Open-source Python projects categorized as ollama
Related topics: LLM, AI, Python, openai, mcp

Top 23 Python ollama Projects

  1. ragflow

    RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs.

    Project mention: The AI-Native GraphDB + GraphRAG + Graph Memory Landscape & Market Catalog | dev.to | 2025-10-26
  2. ChuanhuChatGPT

    GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.

  3. AstrBot

    ✨ Agentic IM ChatBot Infrastructure ✨ Integrates multiple messaging platforms (QQ, Telegram, WeCom, Feishu, DingTalk, and more), provides a powerful and easy-to-use plugin system, and supports OpenAI, Gemini, Anthropic, Dify, Coze, Alibaba Cloud Bailian, knowledge bases, and agents.

    Project mention: AstrBot: Revolutionizing Chatbot Development with Ease and Flexibility | dev.to | 2025-03-26

  4. shell_gpt

    A command-line productivity tool powered by AI large language models such as GPT-4 that helps you accomplish your tasks faster and more efficiently.

    Project mention: Supercharge Your Terminal: ShellGPT + ChromaDB + LangChain for Context-Aware Automation | dev.to | 2025-09-01

    🗃 To explore ShellGPT in depth, including installation instructions, usage examples, and advanced configuration options, head over to the official ShellGPT GitHub repository.

  5. ollama-python

    Ollama Python library

    Project mention: A Beginner's Guide to Ollama Cloud Models | dev.to | 2025-10-19

    Code Examples
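
    A minimal sketch using the ollama package, assuming a local Ollama server is running and a model such as llama3 has already been pulled (recent library versions also expose the reply as response.message.content):

      import ollama

      # Send one chat turn to the local Ollama server and print the reply.
      response = ollama.chat(
          model="llama3",
          messages=[{"role": "user", "content": "Why is the sky blue?"}],
      )
      print(response["message"]["content"])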

  6. agentops

    Python SDK for AI agent monitoring, LLM cost tracking, benchmarking, and more. Integrates with most LLMs and agent frameworks including CrewAI, Agno, OpenAI Agents SDK, LangChain, Autogen, AG2, and CamelAI.

    Project mention: Tune self-correct SQL agent with RL: AgentLightning+verl+vLLM+AgentOps+LangGraph | news.ycombinator.com | 2025-08-11

    - AgentOps for collecting training data (telemetry): https://github.com/AgentOps-AI/agentops
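
    A minimal sketch of wiring up AgentOps telemetry; it assumes an AGENTOPS_API_KEY is set in the environment, and details beyond init() may differ between SDK versions:

      import agentops

      # Initialize tracing; by default the API key is read from AGENTOPS_API_KEY.
      agentops.init()

      # ... run your agent or LLM calls here; supported frameworks such as
      # CrewAI, LangChain, or the OpenAI SDK are instrumented automatically ...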

  7. LEANN

    RAG on Everything with LEANN. Enjoy 97% storage savings while running a fast, accurate, and 100% private RAG application on your personal device.

    Project mention: First lightweight local semantic search MCP for Claude Code | news.ycombinator.com | 2025-08-15

    At Berkeley SkyLab, we're the first to bring semantic search to Claude Code with a fully local index in a novel, lightweight structure; check it out at LEANN (https://github.com/yichuan-w/LEANN).

  8. Kiln

    Easily build AI systems with Evals, RAG, Agents, fine-tuning, synthetic data, and more.

    Project mention: DeepFabric – Generate High-Quality Synthetic Datasets at Scale | news.ycombinator.com | 2025-09-26
  9. sdk-python

    A model-driven approach to building AI agents in just a few lines of code.

    Project mention: Amazon Bedrock AgentCore Gateway - Part 5 Add API Gateway REST API as a target for Amazon Bedrock AgentCore Gateway | dev.to | 2025-12-22

    In part 1 of this article series, we introduced Amazon Bedrock AgentCore and specifically Amazon Bedrock AgentCore Gateway, which transforms existing APIs and AWS Lambda functions into agent-ready tools, offering unified access across protocols, including the Model Context Protocol (MCP), and runtime discovery. In part 2 of the series, we used Amazon Bedrock AgentCore Gateway to convert an existing Amazon API Gateway REST API into MCP-compatible tools and made them available to agents through a Gateway endpoint. We also used the Strands MCP Client to talk to this AgentCore Gateway endpoint. In that example we extracted the OpenAPI spec to transform the existing Amazon API Gateway REST API into MCP tools using Bedrock AgentCore Gateway, because at the time of writing AgentCore Gateway didn't support adding an Amazon API Gateway REST API directly as a target (only AWS Lambda was supported as an AWS internal service). This has since changed and the feature is now supported.
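
    Tying back to the "few lines of code" description above, a minimal sketch with the Strands Agents SDK, assuming AWS credentials with access to a default Amazon Bedrock model are configured:

      from strands import Agent

      # Create an agent with the default model provider and ask a question.
      agent = Agent()
      result = agent("Summarize what an MCP gateway does in one sentence.")
      print(result)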

  10. Devon

    Devon: An open-source pair programmer

  11. elia

    A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.

  12. oterm

    The terminal client for Ollama.

  13. comfyui_LLM_party

    An LLM agent framework in ComfyUI. Includes MCP server, Omost, GPT-SoVITS, ChatTTS, GOT-OCR2.0, and FLUX prompt nodes, plus access to Feishu and Discord, and adapts to any LLM with an OpenAI/aisuite-style interface, such as o1, Ollama, Gemini, Grok, Qwen, GLM, DeepSeek, Kimi, and Doubao. Also adapted to local LLMs, VLMs, and GGUF models such as Llama-3.3 and Janus-Pro, with linkage GraphRAG.

  14. openai-edge-tts

    Free, high-quality text-to-speech API endpoint to replace OpenAI, Azure, or ElevenLabs

    Project mention: Open source TTS by Resemble (claiming they are sota) | news.ycombinator.com | 2025-06-11

    It can definitely run on CPU — but I'm not sure if it can run on a machine without a GPU _entirely_.

    To be honest, it uses a decently large amount of resources. If you had a GPU, you could expect about 4-5 GB of memory usage. And given the optimizations for tensors on GPUs, I'm not sure how well things would work "CPU only".

    If you try it, let me know. There are some "CPU" Docker builds in the repo you could look at for guidance.

    If you want free TTS without using local resources, you could try edge-tts https://github.com/travisvn/openai-edge-tts
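
    A minimal sketch of calling a locally hosted openai-edge-tts server through the official OpenAI Python client; the base URL, port, and API key below are placeholder assumptions, so check the project's README for the actual defaults:

      from openai import OpenAI

      # Point the OpenAI client at the local OpenAI-compatible TTS endpoint
      # (placeholder URL and key; adjust to your openai-edge-tts configuration).
      client = OpenAI(base_url="http://localhost:5050/v1", api_key="your_api_key_here")

      speech = client.audio.speech.create(
          model="tts-1",
          voice="alloy",
          input="Hello from a local, OpenAI-compatible text-to-speech endpoint.",
      )

      # Write the returned audio bytes to disk.
      with open("hello.mp3", "wb") as f:
          f.write(speech.content)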

  15. Alpaca

    🦙 Local and online AI hub (by Jeffser)

    Project mention: Ask HN: Recommendations for Running LLMs Locally | news.ycombinator.com | 2025-02-02

    Alpaca is in the Pop!_Shop (what is called the store on most other distros), so it's just a matter of clicking install. Inside, it has a list of several models. I downloaded a small model, but even a small model is very big, 4 gigs; some are over 100 GB. When one is downloaded, just click on new chat.

    https://github.com/Jeffser/Alpaca

    For GPT4ALL, on the Cinnamon Mint box I first installed Easy Flatpak and then installed GPT4ALL from there. There I had a slightly different goal: I had a lot of PDF and EPUB books I wanted to index. I downloaded a small model again, a 4.34 GB Llama 3 8B Instruct. In the local docs I put one PDF, 'GitHub for Dummies', and asked it to index it. It took about 30 minutes, IIRC. (Don't start with too many; it could take days to index. You can add docs later and re-index, so you can build your database slowly.) Then, instead of looking up in the book how to revert a commit, you chat with the model and ask it how to revert a commit.

    My take is that the model is the human-language interface for queries, and that's all. Bear in mind that almost everyone knows more about this than me; I just wanted to see what all the fuss was about.

    You can also do something similar to what I did with GPT4ALL online for free, as long as you have a Google account: go to notebooklm.google.com and start uploading your documents.

  16. promptmap

    A security scanner for custom LLM applications.

    Project mention: Promptmap: A prompt injection scanner for LLM applications | news.ycombinator.com | 2025-01-22
  17. minima

    On-premises conversational RAG with configurable containers (by dmayboroda)

    Project mention: Deployable On-Premises RAG | dev.to | 2025-03-03
  18. transcriptionstream

    Turnkey self-hosted offline transcription and diarization service with LLM summary.

  19. tools

    A set of tools that gives agents powerful capabilities. (by strands-agents)

    Project mention: Strands Agents now speaks TypeScript: A side-by-side guide | dev.to | 2025-12-04

    The Strands Agents documentation is the best starting point, with guides for both SDKs. The source code is available on GitHub for both the Python SDK and the TypeScript SDK. For Python, the community tools package provides ready-to-use tools, and the samples repository has complete example agents.
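
    A minimal sketch of using one of those community tools with a Strands agent; it assumes both the strands-agents and strands-agents-tools packages are installed, and the calculator tool name may vary between versions:

      from strands import Agent
      from strands_tools import calculator

      # Give the agent a ready-made tool and let it decide when to call it.
      agent = Agent(tools=[calculator])
      print(agent("What is 1337 * 42?"))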

  20. AudioMuse-AI

    AudioMuse-AI is an Open Source Dockerized environment that brings automatic playlist generation to Jellyfin, Navidrome, LMS and Lyrion. Using powerful tools like Librosa and ONNX, it performs sonic analysis on your audio files locally, allowing you to curate the perfect playlist for any mood or occasion without relying on external APIs.

    Project mention: AudioMuse-AI: Local Sonic Analysis for Auto-Playlists on Jellyfin and Navidrome | news.ycombinator.com | 2025-12-14
  21. genv

    GPU environment and cluster management with LLM support

  22. chaplin

    A real-time silent speech recognition tool. (by amanvirparhar)

    Project mention: Chaplin: Local visual speech recognition (VSR) in real-time | news.ycombinator.com | 2025-02-02
  23. Owl

    A personal wearable AI that runs locally (by OwlAIProject)

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).

Python ollama related posts

  • You’re Talking to Your AI Wrong. Here’s How to Fix It.

    1 project | dev.to | 10 Dec 2025
  • AWS Strands: Sequential Multi Agent Workflow

    2 projects | dev.to | 30 Nov 2025
  • AI Infrastructure on Consumer Hardware

    8 projects | dev.to | 21 Nov 2025
  • Ollama has a native front end chatbot now

    16 projects | news.ycombinator.com | 30 Jul 2025
  • AI: Introduction to Ollama for local LLM launch

    8 projects | dev.to | 20 Jul 2025
  • Clickclickclick: Framework to enable autonomous, computer use using any LLM

    1 project | news.ycombinator.com | 28 Jun 2025
  • Is there a way to run an LLM as a better local search engine?

    3 projects | news.ycombinator.com | 18 Jun 2025

Index

What are some of the best open-source ollama projects in Python? This list will help you:

# Project Stars
1 ragflow 70,257
2 ChuanhuChatGPT 15,416
3 AstrBot 14,197
4 shell_gpt 11,596
5 ollama-python 9,034
6 agentops 5,133
7 LEANN 5,028
8 Kiln 4,485
9 sdk-python 4,714
10 Devon 3,465
11 elia 2,325
12 oterm 2,285
13 comfyui_LLM_party 2,038
14 openai-edge-tts 1,478
15 Alpaca 1,354
16 promptmap 1,071
17 minima 1,021
18 transcriptionstream 904
19 tools 854
20 AudioMuse-AI 774
21 genv 653
22 chaplin 625
23 Owl 623
