
A Gifted Youth Perspective: Human Intelligence during the Age of AI


Author’s Note: In today’s technological era, certain terms are often used interchangeably. To avoid confusion, this article uses “AI” to refer to all types of artificial intelligence (ANI, AGI, and ASI), which are defined within the content. Please note that this article does not aim to shed a completely negative light on AI, nor to discourage its development or use. Rather, it seeks to explore the evolving relationship between humans and AI.

AI is a Potential Threat to Humans

The development of current AI models already challenges the importance of the natural mind. The key distinction between current AI tools, such as ChatGPT, and human brainpower is that AI can only perform tasks that it was programmed to do. Therefore, such tools can be referred to as artificial narrow intelligence (ANI). On the other hand, people theoretically have unlimited capacity to learn new things, and they are not typically constrained in the same way that ANI models are.

But if this contrast has already been identified, why should we still worry about defining intelligence? Undeniably, there is a looming possibility that this logic may no longer apply in the future. This can be attributed to existing projects aimed at developing artificial general intelligence (AGI). Currently theoretical, AGI would be able to match human cognitive capabilities across a wide range of mental tasks. The ethical dilemmas associated with such projects already raise worries among the public.

But it does not stop there.

Efforts to create artificial superintelligence (ASI) are already underway. Another conceptual form of AI, ASI is considered the pinnacle of AI design. If it came into existence, it would surpass human mental faculties. Thus, the importance of a person’s analytical capacity is facing, and will continue to face, new dangers in the age of AI.

Rethinking Human Intelligence

As previously mentioned, we often attribute considerable importance to our intellect. If we do not evolve our understanding of human sagacity, AI may one day render human existence meaningless. Human intelligence can no longer be regarded primarily as being “book smart,” a term for someone who is academically knowledgeable. AI can already replicate this kind of knowledge in a much faster and more advanced fashion.

Therefore, we must look to other types of intelligence, articulated by psychologist Howard Gardner. Some of these categories rely on biological factors.

First, bodily-kinesthetic intelligence involves controlling body movements. Although this may seem purely physical, the brain enables awareness of how the body maneuvers. An AI such as a large language model (LLM) has no body of its own to control. Even if robots use sensors to detect their own movements, they do not perform the same bodily functions as humans. Additionally, AI lacks the survival instincts that organisms have, since it does not live. It simply exists.

Speaking of life, existential intelligence has been proposed as a possible addition to Gardner’s Theory of Multiple Intelligences. Characterized by profound questions about human life, existential intelligence relies on philosophy. Individuals continue to contemplate questions such as “What is the meaning of life, death, and the universe?” Since AI is inanimate, it cannot ponder its own life or death.

Moreover, intrapersonal intelligence requires self-awareness and a deep understanding of one’s own emotions. Interpersonal intelligence focuses on awareness of the emotions of others, including the comprehension of “social cues.” It is important to recognize one’s own feelings before acknowledging the sentiments of others. These abilities go beyond simply labeling emotions. The beauty of the human mind is that it can feel. In contrast, AI can only mimic emotion based on what people have taught it. It lacks the biological machinery necessary to experience emotion, such as a nervous system and hormones. Its emotions are no more real than those of an actor playing a role.

In essence, human cognition is important because it is real. Human wisdom is acquired through lived experience rather than by being blindly fed knowledge. Perhaps AI will never be “all-knowing” if it is incapable of personal experiences.

My Personal Experience

In elementary school, a few of my classmates and I scored high on the Cognitive Abilities Test (CogAT) and were placed in the Advanced Academics Program (AAP), a class with a rigorous curriculum. The CogAT measures a student’s reasoning and problem-solving abilities. Therefore, from a young age, I was conditioned to believe that intelligence was simply a person’s ability to think logically and essentially be “book smart.”

When I joined Mensa in March of 2025, I was exposed to meaningful conversations about the importance of intelligence, especially in the era of AI. The discourse inspired me to ponder this topic myself, and I realized that there is nothing inherently wrong with valuing intellect. Rather, it is important to be cognizant of which definition of intelligence we are valuing. Learning more about the relationship between human sagacity and AI prompted me to reform how I viewed intellect. Contrary to my outlook as a child, I realized that intelligence is so much more than scoring high on an IQ test.

Shaping the Future

A broader definition of human intelligence should not exclude factors such as academic achievement and IQ scores. Instead, it should simply expand to include other forms of wisdom. Organizations like the Mensa Foundation can foster this by supporting research exploring multifaceted forms of intellect.

I would like to see AI attempt to challenge the significance of that.

Natalia V

Natalia V is an award-winning debater and an aspiring lawyer. Natalia joined Mensa in March of 2025 and is a proud member of the Mensa Honor Society. Natalia speaks English, Russian, French, and Polish. When she is not studying, she enjoys spending time with family and friends.

Comments


Very aptly presented the contrast between human intelligence and artificial narrow intelligence. From my experience, the biggest threat is developing human dependency on these AI tools, whether for the smallest task like writing an email or for a task involving more cognitive and analytical thinking like writing code. This would handicap human cognition and rational thinking.