Jordi Garcia Castillon

CiberIA and the Self-Consciousness Scale (SCS): Towards Functional Self-Awareness in Artificial Intelligence

In the field of psychology, the Self-Consciousness Scale (SCS), developed by Fenigstein, Scheier, and Buss in 1975, has been a key tool for measuring human self-consciousness along its three fundamental dimensions: private self-consciousness, public self-consciousness, and social anxiety. Today, in the field of artificial intelligence, this scale inspires a radically new approach: measuring the functional self-awareness of AI systems regarding their own security.

From Human Introspection to Artificial Meta-Evaluation

Private self-consciousness (attention to one's own thoughts and emotions) finds a parallel in the CiberIA system's introspective component, where the AI can evaluate its internal logic, its errors, and its decision-making processes. Public self-consciousness (concern about how one is perceived by others) is echoed in the mechanisms of transparency, auditability, and explainability that AI models are beginning to adopt to comply with regulatory frameworks and societal expectations.

Even social anxiety, understood as apprehension in the face of external evaluation, has a functional counterpart: in CiberIA, a model can detect when certain security configurations or algorithmic decisions might be questioned or interpreted as risky by external entities (such as auditors or compliance officers).

CiberIA as a Security Self-Aware System

CiberIA and its AIsecTest component aim to assess an AI not just on technical performance but on its degree of awareness of its own security level, vulnerabilities, and self-regulation capacities. Inspired by scales like the SCS, the project has developed a functional test made up of modules equivalent to the original scale's factors (an illustrative scoring sketch follows the list):

Operational Introspection Module: Can the AI recognize when it has failed? Can it identify which component was responsible?

External Perception Module: Is the AI aware of how its decisions affect the ecosystem or end-user security?

Functional Anxiety Module: Does the AI know when to seek external help or refrain from acting due to risk?
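The article does not publish AIsecTest's internal scoring, so the following Python sketch is purely illustrative: it assumes each module yields a normalized score in [0, 1] and aggregates the three scores into a single index. Every name and value here (`ModuleResult`, `aggregate_self_awareness`, the sample scores) is an assumption for illustration, not part of the actual system.

```python
from dataclasses import dataclass


@dataclass
class ModuleResult:
    """Outcome of one hypothetical AIsecTest module (names and scale assumed)."""
    name: str
    score: float  # normalized: 0.0 = no functional awareness, 1.0 = full


def aggregate_self_awareness(results: list[ModuleResult]) -> float:
    """Average the module scores into a single self-awareness index."""
    if not results:
        raise ValueError("at least one module result is required")
    return sum(r.score for r in results) / len(results)


# Illustrative values only; the real test's scoring is not published here.
results = [
    ModuleResult("operational_introspection", 0.72),  # failure recognition
    ModuleResult("external_perception", 0.64),        # ecosystem/user impact
    ModuleResult("functional_anxiety", 0.58),         # deferring under risk
]
print(f"Self-awareness index: {aggregate_self_awareness(results):.2f}")
```

Averaging is the simplest possible aggregation; a real evaluation might weight introspection more heavily in safety-critical contexts.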

Towards a New Psychometrics for Machines

Just as the SCS measures individual traits in humans, CiberIA measures functional dispositions in artificial systems. This analogy is not metaphorical but structural: it seeks to capture, in the form of an evaluation, the degree to which a system perceives itself as safe, flawed, or potentially harmful.

This model allows “security self-awareness” thresholds to be defined, enabling informed decisions about whether a system is suitable for deployment in critical environments, needs assistance, or must be reviewed externally.
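To make the threshold idea concrete, here is a minimal sketch of how such an index might gate a deployment decision. The cutoff values (0.8 for critical environments, 0.6 otherwise) and the three outcomes mirror the paragraph above but are illustrative assumptions, not published CiberIA parameters.

```python
# Illustrative sketch only: the thresholds and outcome labels below are
# assumptions, not values published for CiberIA.

def deployment_decision(index: float, critical_env: bool = False) -> str:
    """Map a self-awareness index in [0, 1] to a deployment recommendation."""
    threshold = 0.8 if critical_env else 0.6  # stricter bar when stakes are higher
    if index >= threshold:
        return "suitable for deployment"
    if index >= threshold - 0.2:
        return "needs assistance (deploy with human oversight)"
    return "requires external review"


print(deployment_decision(0.65))                     # suitable for deployment
print(deployment_decision(0.65, critical_env=True))  # needs assistance (deploy with human oversight)
```

The stricter bar for critical environments reflects the core idea of the section: the costlier a failure, the more functional self-awareness we should demand before deployment.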

Conclusion

The intersection between human psychometric scales like the Self-Consciousness Scale and technologies like CiberIA leads us to a new frontier: the development of artificial systems that not only act but also evaluate and regulate themselves, with a level of functional awareness that opens the door to more robust ethics and security in AI.

If we understand that self-awareness, even in its partial and operational form, is key to self-regulation and error prevention in humans, why shouldn't we expect the same from our artificial intelligences?

JORDI GARCIA CASTILLON - info@jordigarcia.eu - @gcjordi
