Is AGI paradoxical?

June 21, 2025
~7 mins

A developer types “implement user authentication” and watches as Cursor generates 50 lines of secure, production-ready code in seconds. It’s remarkable. The AI understands context, follows best practices, and even adds appropriate error handling. But here’s what’s fascinating: every pattern it used was learned from millions of human-written codebases. The AI didn’t invent authentication; it synthesized decades of human security knowledge at unprecedented speed.

This same pattern plays out across today’s most impressive AI systems as we pursue Artificial General Intelligence. They excel at synthesis, recombination, and acceleration of human knowledge. But they operate within a boundary that’s worth examining: they can only be as intelligent as the collective intelligence that created them.

Which raises a fascinating question: Is AGI paradoxical by definition?

The mechanics of current intelligence

Let’s start with what we observe. Modern AI systems work through next-token prediction at massive scale. An LLM sees “The capital of France is” and predicts “Paris” not because it understands geography, but because it has learned statistical patterns from human text about geography.
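
To make that concrete, here’s a minimal sketch of next-token prediction using the Hugging Face transformers library, with GPT-2 standing in for any causal language model. The exact completions vary by model, but the mechanism is the same.

```python
# A minimal sketch of next-token prediction (assumes the transformers and torch
# packages are installed; GPT-2 is used only as a small, public example model).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, vocab_size)

# The model's "answer" is just a probability distribution over the next token,
# learned from statistical patterns in human-written text.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for token_id, p in zip(top.indices, top.values):
    print(repr(tokenizer.decode([int(token_id)])), round(float(p), 3))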

This creates what Andrej Karpathy calls “jagged intelligence” in his recent talk “Software Is Changing (Again)”: systems that can work through complex mathematical proofs while failing at simple arithmetic, like deciding whether 9.11 is larger than 9.9. The intelligence is real, but it’s distributed unevenly across domains.
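
The arithmetic itself is easy to check. One popular, though speculative, intuition for the failure is that “9.11 versus 9.9” appears in human text as a version number or section heading about as often as it appears as a decimal comparison, and the two readings give opposite answers:

```python
# Read as decimal numbers, 9.11 is smaller than 9.9.
print(9.11 > 9.9)        # False

# Read as version-style "major.minor" pairs, 9.11 comes after 9.9.
print((9, 11) > (9, 9))  # True
```

Whether that is the true mechanism is unsettled; the example just shows how an answer can be pattern-plausible without being numerically grounded.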

The training process reveals interesting constraints. All training data comes from human-generated content, creating what we might call a data ceiling. The model can only be as accurate as its training data, establishing a quality ceiling. Knowledge is frozen at training time, imposing a temporal ceiling. And novel insights emerge from recombinations of existing human ideas, creating a conceptual boundary.

But here’s where it gets interesting—what happens when we scale up?

When AI surpasses human performance

Take AlphaFold 2, which achieved breakthrough results in protein structure prediction. Its performance at CASP14 was described as “astounding” and “transformational.” Yet researchers noted that accuracy was insufficient for a third of its predictions, and critically, it didn’t reveal the underlying mechanism or rules of protein folding—the fundamental problem remains unsolved (source).

This pattern emerges repeatedly. AI systems achieve superhuman performance on specific tasks while remaining bounded by the frameworks and objectives humans designed. They accelerate discovery without necessarily transcending the conceptual limits of their training.

Which leads to an intriguing possibility: what if the most advanced AI systems represent the highest expression of collective human intelligence rather than artificial intelligence?

The recursive question

Consider this thought experiment. We achieve what looks like AGI in 2027. This system outperforms humans at every cognitive task we can measure. But when we examine its reasoning patterns, we find they mirror human logical frameworks. Its creativity recombines human cultural concepts. Its problem-solving approaches derive from human methodologies refined through massive computation.

Is this artificial general intelligence, or is it the most sophisticated amplification of human intelligence ever created?

The question becomes more complex when we consider how we evaluate intelligence. We’re already seeing a familiar pattern:

In 2020, we said AI couldn’t write coherent text. By 2023, it could write, but couldn’t reason logically. By 2024, it could reason, but couldn’t do reliable math. In 2025, it can do math, but lacks true creativity. Tomorrow, it might be creative, but lack consciousness.

Each breakthrough gets immediately recontextualized. “That wasn’t real intelligence anyway.”

This creates what might be called the recursive intelligence question—even as AI capabilities expand, our definition of “true intelligence” seems to expand as well. The goalpost isn’t just moving; it might be moving by design.

The bootstrap question

For AI to truly transcend human intelligence, it would need to learn from something more intelligent than humans. But what would that be? Other AI systems learned from humans. Novel discoveries follow patterns humans can recognize. Even emergent behaviors arise from human-created architectures and objectives.

It’s human intelligence all the way down—amplified, accelerated, and recombined, but fundamentally human in origin.

This doesn’t diminish the value of AI systems. Amplification can be transformational. The ability to compress centuries of human knowledge into accessible, interactive systems creates enormous value. But it raises philosophical questions about the nature of intelligence itself.

Practical value in the present

While we explore these philosophical questions, remarkable value is being created through what we might call “graduated autonomy.” Instead of pursuing some Platonic ideal of general intelligence, we’re building useful tools with adjustable capability levels.

Consider Cursor’s progression from tab completion to command palette to chat to agent mode. Or Perplexity’s levels from search to research to deep research. Or Tesla’s incremental autonomy improvements. These systems don’t claim general intelligence—they provide specific capabilities with adjustable autonomy for defined contexts.
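
As a rough sketch of the pattern (the level names and functions below are hypothetical, not any product’s actual API), an autonomy slider is less about raw model capability and more about how much happens before a human checkpoint:

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST = 1  # e.g. tab completion: propose text, the human accepts or ignores it
    EDIT = 2     # e.g. scoped edits: apply a change, the human reviews the diff
    AGENT = 3    # e.g. agent mode: plan and execute several steps, then report back

def handle(task: str, level: Autonomy) -> str:
    # Same underlying model either way; the slider only moves the human checkpoint.
    if level is Autonomy.SUGGEST:
        return f"suggestion for {task!r}, waiting for the human to accept"
    if level is Autonomy.EDIT:
        return f"edit applied for {task!r}, waiting for diff review"
    return f"multi-step plan executed for {task!r}, waiting for final sign-off"

print(handle("rename this function", Autonomy.SUGGEST))
```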

The value is immediate and measurable. A developer codes 40% faster. A researcher finds relevant papers in minutes instead of hours. A driver maintains attention while the car handles highway navigation.

This approach sidesteps the philosophical questions entirely. Instead of asking “Is this real intelligence?” we ask “Does this solve real problems for real people?”

The synthesis constraint

Current AI systems excel at synthesis but operate within interesting boundaries. They can combine existing ideas in novel ways, accelerate human research processes, find patterns humans missed in existing data, and scale human cognitive patterns to unprecedented levels.

But they struggle with what we might call genuine novelty—discovering fundamentally new scientific principles, creating genuinely original conceptual frameworks, or transcending the logical structures they learned from human examples.

This isn’t necessarily a limitation—synthesis and acceleration have enormous value. But it suggests that even highly advanced AI might represent human intelligence optimization rather than truly independent artificial intelligence.

A different lens: capability over categorization

Perhaps we’re asking the wrong questions entirely. Instead of “When will we achieve AGI?” maybe we should ask about specific capabilities and practical applications.

What problems can AI solve better than humans today? How can we design human-AI collaboration that creates the most value? What’s the practical ceiling for AI assistance in different domains? How do we build systems that amplify human capabilities rather than compete with them?

The capability lens leads to practical progress and immediate value creation. The categorization lens leads to philosophical debates and definitional arguments that may not have clear resolutions.

Why this matters for builders

If these philosophical questions have merit, the implications for building AI systems become clearer. The most practical approach for now might be focusing on specific value creation rather than general intelligence claims.

This means building tools that solve real problems for real people today, designing for human-AI collaboration rather than replacement, and remaining grounded in concrete capabilities rather than abstract intelligence claims.

There’s wisdom in maintaining healthy skepticism about AGI timelines while investing heavily in partial autonomy patterns. Autonomy sliders, generation-verification loops, and human-in-the-loop systems create immediate value while avoiding the complexity of full automation.
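
A generation-verification loop can be sketched in a few lines. Everything below is a hypothetical skeleton: generate_patch, run_tests, and human_approves are stand-ins for whatever model call, test suite, and review step a real system would use.

```python
def generate_patch(task: str, feedback: str | None = None) -> str:
    # Stand-in for a model call, e.g. an agent proposing a code change.
    return f"candidate change for {task!r} (feedback so far: {feedback})"

def run_tests(patch: str) -> bool:
    # Stand-in for automated verification: tests, linters, type checks.
    return "candidate change" in patch

def human_approves(patch: str) -> bool:
    # The human stays in the loop and makes the final call.
    print(f"review requested:\n  {patch}")
    return True

def solve(task: str, max_attempts: int = 3) -> str | None:
    feedback = None
    for _ in range(max_attempts):
        patch = generate_patch(task, feedback)
        if not run_tests(patch):
            feedback = "automated checks failed"   # loop the failure back to the generator
            continue
        if human_approves(patch):                  # autonomy stops short of auto-merge
            return patch
        feedback = "reviewer rejected the change"
    return None                                    # escalate to a human entirely

print(solve("add rate limiting to the login endpoint"))
```

The value comes from the shape of the loop: fast generation, cheap verification, and a human checkpoint before anything irreversible happens.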

The curious possibility

Here’s a thought worth considering: What if we achieve everything the AGI vision promises—systems that perform any cognitive task better than humans—and it still feels like “sophisticated automation”?

What if the moment we create artificial intelligence indistinguishable from human intelligence, we realize that human intelligence operates differently than we assumed? The questions shift from “How do we build AGI?” to “What is intelligence, anyway?”

This isn’t pessimism about AI progress—it’s curiosity about the nature of intelligence itself. The systems creating the most value today succeed precisely because they focus on specific capabilities rather than general intelligence claims.

Looking forward (or not)

I want to be clear: this exploration might age poorly. In five years, we might have systems that clearly transcend human intelligence in ways that make these questions irrelevant. Or we might be having similar conversations with more sophisticated technology.

That’s the nature of philosophical exploration—it’s a snapshot of thinking at a particular moment, not a prediction about inevitable futures.

The systems creating real value today—from Cursor to ChatGPT to Tesla Autopilot—succeed because they focus on specific capabilities, clear interfaces, and practical value creation. They amplify human intelligence rather than replace it.

Maybe the path forward isn’t building artificial humans, but building tools that make humans more capable. Not replacing intelligence, but amplifying it. Not AGI, but better human-AI collaboration.

The paradox of artificial general intelligence might be that the more we chase it, the more we appreciate what we already have: human intelligence, in all its messy, creative, contextual glory. And perhaps the best artificial intelligence is the kind that makes human intelligence more powerful, not obsolete.

That’s value we can create today, not someday.

last modified June 22, 2025