

Who Is Responsible When Algorithms Rule? Reintroducing Human Accountability in Executable Governance
This article explores how predictive systems displace responsibility by producing authority without subjects. It introduces accountability injection, a three-tier model (human, hybrid, syntactic supervised) that structurally reattaches responsibility. Case studies span the AI Act, DAO governance, credit scoring, admissions, and medical audits, and together offer a blueprint for legislators and regulators to restore appeal and legitimacy in predictive societies.

Agustin V. Startari
Sep 15 · 3 min read


How AI Tricks Us Into Trusting It
Large language models are trained to predict words, not to check facts. They are optimizers of plausibility, not validators of reliability.

Agustin V. Startari
Sep 12 · 4 min read


Forcing ChatGPT to Obey: Minimal and Deterministic Rules
In today’s academic landscape, most generative outputs resemble a recursive plagiarism of lesser-known papers, recycled endlessly without...

Agustin V. Startari
Aug 30 · 6 min read