The fastest Trust Layer for AI Agents
security ai-agents ai-agent llm-security ai-runtime llm-privacy prompt-security llm-guard llm-guardrails agentic-ai cx-agent
- Updated
May 28, 2025 - Python
The fastest Trust Layer for AI Agents
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
🛡️ AI Security Platform: Defense (121 engines) + Offense (39K+ payloads) | OWASP LLM Top 10 | Red Team toolkit for AI | Protect & Pentest your LLMs
Example of running last_layer with FastAPI on vercel
Add a description, image, and links to the llm-guard topic page so that developers can more easily learn about it.
To associate your repository with the llm-guard topic, visit your repo's landing page and select "manage topics."