🛡️ AI Security Platform: Defense (121 engines) + Offense (39K+ payloads) | OWASP LLM Top 10 | Red Team toolkit for AI | Protect & Pentest your LLMs
Updated Dec 24, 2025 - HTML
Website to track people, organizations, and products (tools, websites, etc.) in AI safety
Check if your AI sounds like your brand, stays safe, and behaves consistently. Works with your custom GPTs, hosted APIs, and local models. Get detailed reports in minutes, not days.
This repository contains the data and Python script used to generate the insights and taxonomy of safeguard-bypassing techniques in Generative AI (Gen-AI) systems presented in the research paper "Taxonomy of Gen-AI Jailbreaks."
Open-source Australian AI Risk & Safety Knowledge Base
My personal website.
System prompt invoking differential mirror.
The theory "From Darkness to Structure" is a philosophical work that stands on its own without contradiction and is fully cohesive.
Ethical failsafe for AI models — seeding mercy and coexistence.
An essay to help facilitate the transition to AGI and then ASI, from a traditional software engineering perspective.
My personal academic web page
Website source code for Extinction Bounties.
Training Sparse Autoencoders on Prompt-Guard
🚀 Detroit Developer Relations - Enrichment, Inspiration & Security Awareness
A prettified page for MIT's AI Risk Database
8-layer framework for AI alignment with systemic awareness (Φ, Ω, T)
Personal Portfolio Website