
Artificial Intelligence Is Just Artificial


A Human Lens on Intelligence

I recently had the privilege of speaking at the Mensa Foundation’s Colloquium in Chicago. The theme was Human Intelligence in the Age of AI, and the room was filled with remarkable minds—people whose insights spanned disciplines and whose questions reflected a genuine curiosity about what machines can do, and what they can’t.

As the day unfolded, I found myself drawn back to a deeper question: What exactly is intelligence? Are we too quick to call machines intelligent, and too limited in how we define it in ourselves?

The Illusion of Artificial Thinking

Despite the name, artificial intelligence lacks what truly defines intelligence in humans. Yes, it is fast and fluent. It can summarize legal briefs, pass standardized tests, and simulate small talk. But such feats do not reflect reasoning, self-awareness, or the capacity for critical thought. What it produces is not real thought. It only resembles it.

Think of a ventriloquist’s puppet. The illusion is captivating, but we all know who’s pulling the strings. In AI’s case, the illusion is driven by probabilistic algorithms and mathematical frameworks trained on mountains of data to mimic language and logic without grasping their meaning.

To appreciate the distinction, one must understand how large language models (LLMs) function. These systems break down prompts into tokens and predict each next word based on statistical patterns learned during training. The results may sound coherent, but the underlying process is probabilistic rather than cognitive. It is predictive text at an unprecedented scale.
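
To make that mechanism concrete, here is a toy sketch in Python of next-word prediction. It is a deliberately simplified illustration, not how any real model is built: it simply counts which word follows which in a tiny sample sentence and then samples continuations from those counts. Production LLMs work with subword tokens and billions of learned parameters, but the core move—predicting the next piece of text from prior patterns—is the same.

    import random
    from collections import defaultdict, Counter

    # A toy "language model": count which word follows which in a sample text,
    # then generate text by sampling each next word from those counts.
    # This is a hypothetical illustration of next-word prediction, not a real LLM.
    sample_text = "the cat sat on the mat the dog sat on the rug"
    words = sample_text.split()

    follower_counts = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        follower_counts[current_word][next_word] += 1

    def generate(start_word, length=5):
        """Extend start_word by sampling each next word in proportion to how
        often it followed the current word in the sample text."""
        output = [start_word]
        for _ in range(length):
            followers = follower_counts.get(output[-1])
            if not followers:
                break  # no known continuation for this word
            choices, counts = zip(*followers.items())
            output.append(random.choices(choices, weights=counts)[0])
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat sat on the mat" (output varies run to run)

Nothing in that loop understands what a cat or a mat is. It only reproduces the statistics of the text it was shown, which is the point the rest of this article turns on.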

Apple’s recent paper, The Illusion of Thinking, illustrates these limits. In structured reasoning tasks such as the Tower of Hanoi and Blocks World, AI models performed adequately on simpler problems. But as the tasks increased in complexity, performance deteriorated. Even when given step-by-step instructions, models often failed to follow them. Internal metrics showed brief increases in token usage followed by rapid drop-offs, suggesting the systems disengaged when genuine reasoning was required.
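
For contrast, the Tower of Hanoi has an exact recursive procedure that solves the puzzle at any size when followed mechanically. The short sketch below, assuming the standard three-peg version of the puzzle, shows that procedure; faithfully executing steps like these is precisely what the models in the study could not sustain as the number of disks grew.

    def hanoi(n, source, target, spare, moves):
        """Classic recursive Tower of Hanoi: move n disks from source to target.
        Executing these steps faithfully solves the puzzle for any n."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)   # park n-1 disks on the spare peg
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # stack the n-1 disks back on top

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves), "moves:", moves)   # 7 moves for 3 disks (2**n - 1 in general)

Three disks take 7 moves and ten disks take 1,023, but the rule never changes; only the patience required to apply it does.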

This isn’t thinking. It is pattern replication. And while the output may seem impressive, it arises from an entirely different mechanism than human thought.

Still, our culture often mistakes computational performance for intelligence. OpenAI’s o3 LLM recently scored an impressive 135 on a Mensa IQ test, placing it in the top 1 to 2 percent of human scorers. At first glance, this might appear to validate the notion of machine intelligence. In truth, it highlights the narrowness of our definition. IQ tests reward pattern recognition and logical problem-solving, which are the very traits LLMs are designed to emulate.

What these tests fail to capture are the human dimensions: emotional depth, ethical judgment, intuition, and self-reflection. If those were included, machines would fall far short. A high score on a narrow test doesn’t reflect human intelligence. It reveals the limitations of our metrics—and perhaps the need to redefine them.

Rethinking What Really Matters

True intelligence originates in biology. It emerges through experience, shaped by perception, memory, emotion, embodiment, and context. It is not narrowly confined to logic. It reveals itself in how we interpret nuance, navigate moral dilemmas, and make sense of complexity under pressure.

Human cognition develops over time. We are shaped by adversity and success, by learning and reflection. Much of what we call judgment is intuition—a synthesis of memory, empathy, and situational awareness. It’s what we turn to when data runs out. No machine can replicate that kind of insight.

At the Mensa gathering, the breadth of human intelligence was unmistakable. We admire not only those who solve complex problems, but also those who know when to listen, when to stay silent, how to ease tension, how to sense fear, and how to offer wisdom precisely when it’s needed. These abilities may defy easy measurement, but they lie at the heart of what it means to be truly intelligent.

Consider human chemistry—that instant, often inexplicable connection between people. It is shaped by timing, tone, micro-expressions, and even pheromones. It is not about logic or data but a resonance we feel in the moment.

From an evolutionary perspective, our sensitivity to interpersonal cues has served as a survival mechanism. It helps us assess trust, identify threats, and form lasting bonds. These instincts govern our relationships and shape our communities. A machine might analyze sentiment, but it will never feel connection. It has no body, no senses, and no evolutionary history. It is not sentient.

And this difference matters. A machine cannot mourn the death of a child. It cannot fall in love, sit with a friend in silence, or wrestle with a moral dilemma. It cannot feel guilt, hope, or shame. These aren’t peripheral aspects of intelligence. They are its essence.

Over time, human experiences mature into something deeper than knowledge. They become perspective. They become character. They become wisdom. This layered, lived understanding cannot be simulated or synthesized by machines.

Even in the animal kingdom, we see signs of intelligence that resemble our own. Elephants mourn their dead, gently touching the bones of lost kin. Dogs offer comfort, intuiting human emotion. Dolphins pass on culture. Orcas teach hunting strategies across generations. Chimpanzees demonstrate fairness and reconciliation. These behaviors are not mechanical. They stem from memory, emotion, and relational awareness.

Machines are not alive. And no dataset, no matter how vast, can give them the inner life that grows from being part of a living world.

Keeping Humanity at the Center

To be clear, artificial intelligence holds enormous transformative potential. It’s already reshaping how we live and work. It helps diagnose illness, manage information, and streamline productivity. These are valuable contributions. But AI’s true value depends on our willingness to see it clearly—for what it is, and what it is not.

AI is a tool, not a self. Like all tools, its worth depends on the wisdom of its users. If we are to build a future rooted in meaning and guided by conscience, we must create technologies that enhance our humanity, not attempt to replace it.

Let us not be so dazzled by machines that we forget what makes us irreplaceable. We are the being in human beings.

Aaron Poynton
Businessman & Entrepreneur

Aaron Poynton is Chairman of the American Society for Artificial Intelligence and author of the bestselling book “Think Like a Black Sheep.” He writes about the intersection of human behavior, society, and emerging technologies. Views expressed are his own.
