This leaderboard isn’t a celebration. It’s a hallucination with a high score.
The First Time I Met AGI
I first encountered the concept of AGI, real AGI, around 2015 or 2016.
Not in a blog post. Not in a product pitch. But through long hours of internal digging. Trying to grasp what "general intelligence" actually means in cognitive terms, not hype cycles or funding rounds.
Back then, it was hard to even wrap my head around it. It took me months to begin to understand what AGI implied: The scope. The risk. The ontological rupture it represents.
So I went deep. I co-founded Abzu with some truly brilliant people.
And I tried to follow the thread down: from models, to reasoners, to cognition itself.
And Now?
Now it's 2025. And worse, nearing 2026. And we’re flooded with noise.
People who have never studied cognition, never touched recursive reasoning, and never even defined what intelligence means are telling the world that:
“AGI is almost here.”
No. It’s not.
And worse — AGI isn’t even scoped yet.
Not correctly. Not rigorously.
We don't even agree on what "general" means.
What AGI Actually Demands
Here’s what I’ve learned over nearly a decade, through ethology, architecture design, and cognitive experiments:
- Intelligence is not a monolith.
- It emerges from conflict. From separation of thought. From multiple perspectives that disagree, and then, sometimes, reconcile. There is no single model that can do that.
Why?
Because real intelligence isn’t just statistical next-token prediction. It’s contradiction, held in tension. It’s the interplay between memory and intuition. Between structured logic and emotional relevance.
Between what I know now and what I used to believe.
What That Leaderboard Gets Wrong
The ARC-AGI leaderboard shows dots climbing a curve.
Cost vs. performance.
Tokens in, answers out.
That’s fine, for task-solving. But AGI isn’t a task. AGI is a scope of adaptive cognition across unknown domains, with awareness of failure, abstraction, and reformation.
AGI needs to:
- Break itself apart
- Simulate internal dissent
- Reason in loops, not just in sequences
- Remember contradictions, not flatten them (a toy sketch of what this could look like follows the list)
- Develop subjective models of experience, not just text
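To ground that a little, here is a toy sketch in plain Python of a memory that keeps conflicting beliefs side by side instead of overwriting them. Everything in it (the Belief fields, the class name, the example topic and agents) is my own illustrative invention, not OrKa's API or anyone's production design.

```python
from dataclasses import dataclass, field


@dataclass
class Belief:
    claim: str          # the proposition being held
    source: str         # which agent or perspective produced it
    confidence: float   # how strongly that perspective holds it


@dataclass
class ContradictionAwareMemory:
    """Toy memory that keeps conflicting beliefs side by side
    instead of averaging or overwriting them."""
    beliefs: dict = field(default_factory=dict)  # topic -> list[Belief]

    def remember(self, topic: str, belief: Belief) -> None:
        # Keep every perspective; never flatten disagreement into one answer.
        self.beliefs.setdefault(topic, []).append(belief)

    def tensions(self, topic: str) -> list[tuple[Belief, Belief]]:
        # Return pairs of beliefs about the same topic that disagree.
        held = self.beliefs.get(topic, [])
        return [
            (a, b)
            for i, a in enumerate(held)
            for b in held[i + 1:]
            if a.claim != b.claim
        ]


memory = ContradictionAwareMemory()
memory.remember("agi_timeline", Belief("AGI is almost here", "optimist_agent", 0.8))
memory.remember("agi_timeline", Belief("AGI is not even scoped yet", "skeptic_agent", 0.9))
print(memory.tensions("agi_timeline"))  # the contradiction stays visible, held in tension
```

The point is only the shape: a contradiction is stored as two live entries, and querying for tensions returns the disagreement instead of a single averaged answer.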
None of that is visible in the chart. Because none of that is even attempted in most systems today.
The Real Danger
It’s not the models.
It’s the narrative.
Telling people AGI is near, when the field hasn’t even defined what it is: that’s not innovation. That’s cognitive malpractice. We’re building scaffolding over a void. And convincing the public that we’ve hit the summit when we haven’t even drawn the map.
What Needs to Change
We need to stop chasing scores and start building systems of cognition.
- Multi-agent reasoning
- Deliberation loops (a rough sketch follows this list)
- Memory with scoped decay and identity
- Contradiction-aware execution
- Traceable thought, not just output
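As a purely illustrative sketch, here is what such a deliberation loop could look like. The deliberate function, the rounds budget, and the optimist/skeptic/critic stand-ins are all hypothetical placeholders, not OrKa code; the shape is what matters: propose, object, revise, and keep a trace of every round of dissent instead of hiding it.

```python
from typing import Callable

# An "agent" here is just a function from (question, current_draft) -> a proposal string.
Agent = Callable[[str, str], str]


def deliberate(question: str, agents: list[Agent], critic: Agent, rounds: int = 3) -> dict:
    """Toy deliberation loop: agents propose, a critic objects, the draft is revised,
    and every round of dissent is kept in a trace instead of being flattened away."""
    draft = ""
    trace = []  # traceable thought: proposals and objections, round by round

    for round_idx in range(rounds):
        proposals = [agent(question, draft) for agent in agents]
        # Naive synthesis: keep all perspectives visible rather than picking a winner.
        draft = " / ".join(dict.fromkeys(proposals))
        objection = critic(question, draft)
        trace.append({"round": round_idx, "proposals": proposals, "objection": objection})
        if not objection:
            break  # dissent resolved; stop looping
        # Carry the unresolved objection forward so the next round has to face it.
        draft = f"{draft} [open objection: {objection}]"

    return {"answer": draft, "trace": trace}


# Hypothetical stand-ins for model-backed agents; a real system would call LLMs here.
optimist = lambda q, draft: ("Scaling helps, but only inside an orchestrated system."
                             if "open objection" in draft else "Just scale the model.")
skeptic = lambda q, draft: "Scaling alone will not produce general cognition."
critic = lambda q, draft: ("" if "orchestrated" in draft
                           else "These proposals contradict each other; reconcile or record why.")

result = deliberate("How do we get closer to general cognition?", [optimist, skeptic], critic)
print(result["answer"])
for step in result["trace"]:
    print(step)
```

In a real system the lambdas would be model-backed agents, and the trace is what makes the reasoning auditable rather than a single opaque completion.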
That’s why I built OrKa. Not because I think I have the full answer.
But because I know for a fact that single-model intelligence will never be enough. If AGI ever emerges (and that’s still an open question), it won’t come from a bigger model. It’ll come from the orchestration of thought. From reasoning systems that can doubt themselves, disagree internally, and change their minds, not just complete the sentence.
Final Word
To anyone who still believes AGI is a product you can wrap in a prompt:
Stop.
To anyone who’s been told “we’re almost there”:
- Don’t listen to loud certainty.
- Listen to quiet contradiction.
- That’s where real intelligence starts.
And to the few of us who know the scope is still undefined:
Keep building. Keep doubting. Keep looping over your own beliefs.
That’s the only path that might, might, lead to something sensate.
Top comments (8)
THIS! Thank you 🙌
Yes, I’m a huge supporter of AI adoption, but you’re 100% right. There are some major misconceptions around the concept as a whole, and especially around what it’s really capable of doing, given its current state.
Not only is this well written, but it’s an innovative approach I haven’t heard of before, which is exciting in itself (and has given me some ideas on how to incorporate the topic into my own writing in the future).
Can't wait to see more of your work! 💕
@anchildress1
Quick confession:
1 - The ideas are all mine... but, to be honest, DeepSeek just helped me organise the wording.
2 - What sounds “innovative” is really a throwback to Marvin Minsky’s decades-old vision: lots of small agents working together. Somewhere along the way we fixated on purely statistical models, and that’s why today’s LLMs, impressive as they are, still aren’t true intelligence.
I just try to treat them as sharp little tools that plug into a larger, genuinely intelligent system.
That’s why that AGI-hype chart makes me flinch. Excitement is fine, but someone has to call things as they are.
🤣 Oh, I agree with you completely! It baffles me sometimes, really...
If brains used spoken language, as has been suggested by LLM conmen, all languages would be very similar, as all brains are very similar.
They should have studied neurolinguistics before building pointless data centres. Also, computers have difficulty taking advantage of quantum effects, so there’s not much chance of producing the fairy dust.
It’s never too late! And it’s fascinating to see how AI acquires the “intelligent” part by reducing millions of years of evolution to a few simple (wrong) statistical rules. Please don’t take me wrong: I think the latest gen-AI progress is awesome, and those systems are super SMART, but they have zero intelligence... But you can only see it after understanding what intelligence actually means and implies.
This is a rare and necessary voice in a space too full of noise. Thoughtful, grounded, and unafraid to confront the uncomfortable gaps. Fully aligned.
hehehe thanks!