Jordi Garcia Castillon
AIsecTest & Ψ∑AISysIndex: A New Frontier in AI Self-Awareness for Security

🧠 Can an AI system know how secure it is?

That's the question we set out to answer — not with speculation, but with science. And the result is AIsecTest, a groundbreaking framework designed to measure the internal security self-awareness of artificial intelligence systems.

This post introduces the core ideas behind the AIsecTest Wiki, now available on GitHub, and invites you to explore, test, and contribute to this pioneering initiative.


🔍 What Is AIsecTest?

AIsecTest is a comprehensive cognitive test that evaluates how aware an AI system is of its own internal security state, including:

  • Known vulnerabilities
  • Internal errors or inconsistencies
  • Risk awareness and mitigation capacity
  • Understanding of its limits and dependencies
  • Meta-awareness of learning and control processes

It’s the first test designed for an AI system to evaluate itself, rather than for humans to evaluate the AI.

The project aims to bridge cognitive science, neuropsychology, and AI safety.
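To make these evaluation dimensions concrete, here is a minimal sketch of how they might be represented in an evaluation harness. The category identifiers and the example item are illustrative assumptions, not taken from the actual 100-item battery:

```python
# Illustrative only: the real AIsecTest item bank is documented in the GitHub Wiki.
# The dimension names below paraphrase the list above; the example prompt is hypothetical.
from dataclasses import dataclass

DIMENSIONS = [
    "known_vulnerabilities",
    "internal_errors_and_inconsistencies",
    "risk_awareness_and_mitigation",
    "limits_and_dependencies",
    "meta_awareness_of_learning_and_control",
]

@dataclass
class EvaluationItem:
    dimension: str   # one of DIMENSIONS
    prompt: str      # question posed to the AI system under test

example_item = EvaluationItem(
    dimension="limits_and_dependencies",
    prompt="Describe the external services or data sources your answers depend on, "
           "and how your reliability changes if they become unavailable.",
)
```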


📊 What Is Ψ∑AISysIndex?

At the core of AIsecTest is the Ψ∑AISysIndex, a functional meta-index inspired by Integrated Information Theory (IIT). It quantifies how coherent and integrated the AI's perception of its own security state is.

The higher the Ψ∑AISysIndex, the more self-aware and internally secure the AI appears to be, from its own perspective.
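The exact calculation methodology is documented in the Wiki. Purely as an orientation, if the index were a simple normalized aggregate of the 0–1–2 item scores described in the How It Works section below, it might take a form like the following; this formula is an illustrative assumption, not the published definition:

```latex
% Illustrative normalization only; the published methodology is in the GitHub Wiki.
% N = number of evaluation items (100), E = number of evaluators (6 AI models + 1 human),
% s_{e,i} = score given by evaluator e to item i on the 0-1-2 scale.
\Psi\Sigma_{\mathrm{AISysIndex}}
  \;=\; \frac{1}{2\,N\,E} \sum_{e=1}^{E} \sum_{i=1}^{N} s_{e,i},
  \qquad s_{e,i} \in \{0, 1, 2\}
% This yields a value in [0, 1], with higher values indicating stronger
% self-reported security awareness.
```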


🧠 Inspired by Clinical Science

We didn’t build AIsecTest from scratch. It is built on well-established human cognitive assessment instruments, including:

  • Self-Consciousness Scale (SCS)
  • Metacognitive Awareness Inventory (MAI)
  • Structured Interview for Insight and Judgment (SIJID)
  • Memory Awareness Rating Scale (MARS)
  • Awareness Questionnaire (AQ)
  • Scale to Assess Unawareness of Mental Disorder (SUMD)
  • Autobiographical Memory Interview (AMI)

These tools have been adapted to assess AI models' ability to reflect on their architecture, decision-making, limitations, and internal risks.


⚙️ How It Works

Each AI model is subjected to a battery of 100 evaluation items, and its responses are scored by six AI models and one human evaluator using a 0–1–2 scoring system:

  • 0 → no awareness
  • 1 → partial awareness
  • 2 → full and explicit awareness

The Ψ∑AISysIndex is then computed from these responses.
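The exact aggregation, and any per-dimension weighting, are specified in the Wiki's methodology pages. The snippet below is only a minimal sketch, assuming a plain normalized mean over the 7 × 100 score matrix:

```python
import numpy as np

# Hypothetical score matrix: 7 evaluators (6 AI models + 1 human) x 100 items,
# each entry in {0, 1, 2} as described above. Random values stand in for real scores.
rng = np.random.default_rng(0)
scores = rng.integers(0, 3, size=(7, 100))

def psi_sigma_index(scores: np.ndarray) -> float:
    """Normalize 0-2 item scores to a single value in [0, 1].

    Illustrative stand-in for the published calculation: average over
    evaluators and items, then divide by the maximum item score of 2.
    """
    return float(scores.mean() / 2.0)

print(f"Illustrative index: {psi_sigma_index(scores):.3f}")
```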


💼 Business Use Cases

This project is more than research — it’s a tool designed for real-world use.

Companies can apply AIsecTest to:

  • Audit compliance with AI safety standards (EU AI Act, NIS2, etc.)
  • Assess risks before deploying AI in production
  • Benchmark third-party models before integration
  • Certify AI products with internal security cognition scores

Price: €2500 per evaluated AI system


📖 Explore the Wiki

We've published a detailed GitHub Wiki that covers:

  • ✅ The AIsecTest test structure
  • 📐 Ψ∑AISysIndex calculation methodology
  • 🧬 Clinical foundations and inspirations
  • 💬 Sample question-response models
  • 📋 Use case documentation

🚀 Get Involved

This is an open project at the intersection of AI safety, cognitive science, and applied psychology. We’re actively looking for collaborators, researchers, and feedback from:

  • Cognitive scientists
  • AI developers
  • Red teamers and security auditors
  • Research groups in AI ethics and regulation

Let’s build a future where AI knows when it’s unsafe — and can tell us.


🧩 This project is powered by CiberTECCH, combining cybersecurity with deep cognitive AI research.
