You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. 🚀💥
OllaDeck is a purple technology stack for Generative AI (text modality) cybersecurity. It provides a comprehensive set of tools for both blue team and red team operations in the context of text-based generative AI.
A high-performance string formatter written in Rust. This project detects and blocks LLM prompt injection and jailbreak attacks. It also features a customizable rule-based system and defends against obfuscated prompt attacks.
A series of serverless functions/resources (and Terraform) for consuming language model inputs and outputs to S3, enriching the data via sentiment analysis and topic modelling, loading to DynamoDB and subsequently monitoring for configurable deviation within the latent vector space.
The Citadel is not just a training platform; it is a battleground. As AI systems integrate deeper into our critical infrastructure, the attack surface expands exponentially. This application is a purpose-built LLM Pentesting Environment designed to simulate real-world threats against Large Language Models.
A powerful, community-curated toolkit to attack, evaluate, defend, and monitor Large Language Models (LLMs) — covering everything from prompt injection to jailbreak detection.
This repository contains the implementation and benchmarking framework for AI Guardrails, specifically focusing on hallucination detection in Large Language Models (LLMs). The project was developed for PyCon Taiwan 2025 presentation on AI safety and reliability.