Find secrets with Gitleaks 🔑
Secure, high-performance AI infrastructure in Python.
CLI for running large numbers of coding agents in parallel with git worktrees
Fine-tune LLMs on K8s using Runbooks
OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs)
An open-source agent project supporting 6 chat platforms, OneBot v11 one-to-many connections, streaming messages, agent conversations with keyboard bubble generation, and 10+ LLM APIs (continuously updated). It can convert multiple LLM APIs into a common context-aware format.
Carbon Limiting Auto Tuning for Kubernetes
This project lets you launch your Telegram bot in a few minutes to communicate with free or paid AI models via OpenRouter.
WebUI for OpenAI, Ollama and Anthropic
Overengineered Telegram bot that functions as a chatbot
An AI agent workflow engine designed for scale
Democratizing AI Innovation to the Masses on Commodity Hardware
A TUI/CLI tool for interfacing with an LLM fine-tuned on various language tasks. It emphasizes showing the user the changes made so they can learn from them.
This project provides a Kubernetes Operator for managing the lifecycle of the inference-gateway and its related components. It simplifies deployment, configuration, and scaling of the gateway within Kubernetes clusters, enabling seamless integration of inference workflows.
LLM-Profiler is a tool for evaluating the performance of online serving engines for LLMs
AI based Kubernetes deployment autoscaler
Enriching healthcare data with prompt-generated summaries.
A runtime for agent-based applications, conversational interfaces, and knowledge-grounded automation