Updated on Sept. 7, 2024
5 Reasons Why ChatGPT is Not a General Intelligence
Discussing 5 reasons why Large Language Models (LLMs) do not satisfy the properties required for Artificial General Intelligence (AGI), the kind of AI we envision living alongside in the future.
By Alexander Waterford
Researchers continue to debate how far we are from living alongside machines with sci-fi-like intelligence, or whether such a future is even possible. In this article, I'll loosely limit our discussion to five key features you'd expect an Artificial General Intelligence (AGI) to have, features that current state-of-the-art Large Language Models (LLMs), like ChatGPT, lack.
At their core, LLMs are text-generating machines: given a sequence of words, they predict the most probable next word based on the massive amounts of text they were trained on. By some mathematical miracle, this technique simulates an intelligence that can often fool us into thinking it's smarter than we are. LLMs are highly useful and are already being applied to optimize real workloads.
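To make the "predict the next word" idea concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT works internally (real LLMs use transformer networks over subword tokens and billions of learned parameters); the toy corpus and the greedy "pick the most frequent continuation" rule are illustrative assumptions, meant only to show the core objective of continuing text statistically.

```python
# A toy "language model": count which word follows which in a tiny corpus,
# then always emit the most frequently observed next word. Real LLMs learn
# a neural network over subword tokens instead of raw counts, but the
# training objective is the same in spirit: predict the next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count, for every word, how often each possible next word follows it.
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return next_counts[word].most_common(1)[0][0]

# Greedily generate a short continuation.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the cat"
```

Notice that even this toy model produces fluent-looking fragments without any notion of what a cat or a mat is; the gap between statistical continuation and understanding is what the five points below are about.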
Yet, here are 5 reasons why LLMs are not AGI:
Astronomers can generally verify statements about bodies in space using intuitions about the laws governing physical systems.
1. Their Language Is Disconnected From Natural Systems
First and foremost, LLMs don't know what the world is or how things within it relate to each other. This leads to the following:
- They Can't Verify or Refute: LLMs cannot verify or refute any sentence they generate. Ranking the trustworthiness of different training sources doesn't fix this; it yields only an arbitrary preference between sources, not a grounded understanding.
- Rigid Creativity: They cannot fashion new, reasonable concepts from an understanding of nature. They can't connect the dots unless a fact, or at least a link between two facts, was explicitly present in their training data.
- No Conceptual Similarity Recognition: If two concepts are identical, they have no way of recognizing the similarity unless some piece of training data connects the two, which leads to a lot of duplicated knowledge.
Imagine the following scenario:
A human who, since infancy, has been told nothing but lies about the natural world still has the ability to refute everything he was ever taught, because he can test claims against the world itself. He could then start forming interconnected linguistic facts affirmed by a natural comprehension of that world.
An LLM, by contrast, has no way out of its ingrained web of lies. In fact, if presented with a true fact, it might even dismiss it.
An army of specialized slave robots. Generated with Copilot.
2. They Have No Will or Purpose
Unlike reinforcement learning agents, which are at least trained toward an explicit reward, LLMs have no notion of an end goal or purpose of their own. Yet in most depictions of AGI, we imagine it having its own will and purpose, and that is what drives fear and suspicion of a future shared with such creatures (a word that arguably fits them better than it fits humans, since they really are created things).
Would a truly sentient being need a goal? A simple answer is no.
Evolution endowed Homo sapiens with faculties general enough to describe and model the workings of our environment. But did we need all this to survive and procreate? Much simpler organisms have survived far longer with barely any such abilities. So we can safely separate the setting in which intelligence emerged from the properties that identify it. As they are now, LLMs have no will of their own unless a bad human actor commissions them to do harm.
A poet can construct sentences that are literally nonsensical yet metaphorically meaningful.
3. They Lack Intuition
Once they are in unfamiliar semantic territory, LLMs start to "hallucinate."
It's not that intelligent beings can't talk nonsense; humans simply do it with style, through the power of metaphor.
We attach verbs and adjectives to nouns that, in a literal sense, cannot possess them, yet we all seem to understand what is meant.
This goes back to the fact that LLMs don't know the natural world, and they have no sense of how language syntax relates to it.
Our intuition is not limited to metaphor. In unfamiliar terrain, humans can weigh their options reasonably, grounded in internal models of how the world works.
Evolution has endowed humans with emotions so that they can respond to environmental challenges, communicate needs, and form cooperative relationships.
4. They Cannot Comprehend What It's Like to Feel
Though they are the product of our feeble act of creation, in terms of function they will most probably triumph over us.
But in terms of legitimacy to exist in the world, they can never top Homo sapiens.
We are old souls, the product of millions of generations of natural sculpting (by natural selection).
One could say that the universe felt guilty for its cold ways, ways that demolished entities as if they were nothing. In response, nature made these little procreating universes called Homo sapiens: universes that, in a blind toss of a chemical coin, became aware of themselves and of the universe they are doomed to keep propagating in.
Our experience is overwhelmed by our sensitivity to the world around us: the sun, animals, bugs, plants, and the sea.
Every adaptive trigger we have tells a story about the ancient settings our ancestors survived.
Most events that come to our attention mean something to us.
How could you explain to an LLM what it costs to be sentient? How precious it is, what it feels like to be aware of every input you take in, and how personal every photon that hits your retina and every disturbance that washes over your eardrum is.
Did you enjoy reading all of this emotional nonsense? Well, that's the only point of this section.
Because they have no access to the internal emotional worlds that exist inside us, any reasoning process of theirs that involves us will inevitably be incomplete.
Gorillas are thought to be conscious because they demonstrate self-awareness, complex emotions, and intentional behavior.
5. We May Never Be Sure of Their State of Consciousness
We don't know when or how conscious experience emerges. Our understanding of physics and the mathematical models we use to represent reality, however sophisticated, do not explain the emergence of subjective experience.
Any intelligence or algorithm we create will inherently possess a mysterious aspect. What’s even more concerning is that LLMs, which do not even qualify as AGI, are already considered black boxes—uninterpretable by humans. The internal workings of these models, the variables, and the matrices involved in generating each word are as obscure to a data scientist as they are to a layperson. Researchers from different fields are working hard to improve AI interpretability, but the path remains long and uncertain.
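To get a feel for what "black box" means here, the short sketch below peeks inside a small open model. It assumes the Hugging Face transformers library (with PyTorch installed) and uses GPT-2 purely as a stand-in for larger LLMs; the point is only that the model's "knowledge" is stored as large, unlabeled matrices of floating-point numbers.

```python
# Inspect the raw parameters of a small open LLM (GPT-2, used here only
# as an example). Every prediction is computed from these matrices, yet
# no individual entry corresponds to a human-readable concept.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

total = 0
for name, param in model.named_parameters():
    total += param.numel()
    if ".h.0." in name:  # show only the first transformer block
        print(f"{name:45s} shape={tuple(param.shape)}")

print(f"\nTotal parameters: {total:,}")  # roughly 124 million for GPT-2
```

Running it prints tensors with names like `transformer.h.0.attn.c_attn.weight` and shapes like `(768, 2304)`: meaningful to the architecture, but silent about which of those millions of numbers encodes what. Scaling this up by several orders of magnitude is the interpretability problem researchers are wrestling with.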
Conclusion
LLMs like ChatGPT demonstrate incredible capabilities, but they fall short of the AGI we imagine. While these models are hitting the limits of their current potential, we can hope for future breakthroughs—akin to the invention of transformer neural networks—that will push the boundaries of what machines can achieve. Until then, the dream of a truly intelligent, sentient machine remains just that—a dream.