What is not Artificial Intelligence?

Or ‘What AI is Not’

Artificial intelligence (AI) has become one of the most transformative and talked-about technologies of our time. Headlines routinely declare the imminent rise of conscious machines and radical breakthroughs in replicating human-level intelligence in artificial substrates.

However, most current AI systems are far more limited than these headlines suggest. They excel only in narrow domains and lack the general, integrated intelligence that comes naturally to even young children. As AI grows increasingly capable and ubiquitous, it is important that we as a society have realistic expectations about its abilities and limitations.

In this article, I aim to provide a comprehensive overview of the capabilities that modern AI systems conspicuously lack compared to the human mind. Understanding what AI is not can illuminate promising directions for future research and prevent us from ascribing human-like faculties to machines where none are warranted.

We will explore limitations of AI along multiple dimensions – behavior, cognition, adaptation, reasoning, knowledge representation, and self-understanding. This assessment of current deficiencies can help frame realistic ideas of how much further we must progress before achieving AI that is truly on par with the human intellect.

AI behavior lacks human-like flexibility and common sense

Starting with observable AI behavior, modern systems display a striking lack of flexible, multifaceted competence and basic common sense compared to even young children. Narrow AI can excel in constrained environments given sufficient training data, but is unable to competently extend behaviors to novel situations in the way humans fluidly adapt.

For instance, AlphaGo achieved superhuman performance in the game Go, but lacks even toddler-level competence at simple physical and social tasks. Self-driving cars can navigate roads, but cannot walk up stairs or understand social cues.

Chatbots can string together fluent-sounding but meaningless exchanges, yet fail at basic fact-checking and reasoning. Unlike humans, narrow AI cannot skillfully leverage learning gains across diverse contexts to cooperatively solve problems requiring common knowledge.

AI agents also lack basic situational awareness and world knowledge that would allow them to avoid absurd or nonsensical behaviors in everyday environments. They cannot comprehend the motivations, mental states, or physical properties of agents around them.

Humans build vast repositories of practical knowledge through embodied living that let us behave appropriately across myriad unpredictable situations and understand the world on an intuitive level. But even the most advanced AI is confined to behaviors pre-specified by human programmers within limited training paradigms, not the flexible learning, planning, and knowledge accumulation over years that characterizes human development.

AI has no conscious experience or ‘qualia’

At a subjective level, perhaps the most fundamental limitation of AI systems today is the complete absence of conscious experience or qualia. There is nothing that it “feels like” to be an AI system. They have no private internal mental states, sensations, emotions, or sense of personal identity. They cannot suffer, feel joy, experience beauty, or contemplate their own existence.

AI has no sentience, only the outward veneer of intelligent behavior enabled by immense computing power. This behavior emerges not from conscious volition, understanding, or a sense of self, but rather automatically from algorithms maximizing output according to design parameters.

No matter how convincing its conversational abilities may get, for current AI there is, as philosophers say, “nobody home” – no internal subjective experience analogous to the rich conscious lives of humans that give rise to personal meaning.

AI has no independent intentionality or desire

Relatedly, existing AI systems have zero capacity for independent intentionality or desire. They pursue no self-generated goals, experience no longing for future states, and have no preference for being in any condition other than what they are programmed for.

AI systems cannot become bored, distracted, or sidetracked in the spontaneous purposeful ways humans can. Their “goals” consist only of mindlessly maximizing quantifiable metrics representing task success according to the utility functions specified by human programmers.
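
To make this concrete, here is a toy Python sketch of what such a “goal” amounts to. Everything in it is invented for illustration; the point is only that the system’s entire “motivation” reduces to an argmax over a fixed, designer-specified utility function.

```python
def utility(action: str) -> float:
    # Hard-coded by the designer; the system cannot question or revise it.
    scores = {"collect_reward": 1.0, "explore": 0.2, "idle": 0.0}
    return scores.get(action, 0.0)

def choose_action(actions: list[str]) -> str:
    # The entire "motivation" of the system: an argmax over a fixed metric.
    return max(actions, key=utility)

print(choose_action(["idle", "explore", "collect_reward"]))  # collect_reward
```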

Furthermore, unlike the malleable, emotionally driven desires of humans, the programmed goals of AI systems remain frozen until manually changed by engineers. AI lacks any sort of autonomous intentionality that would allow it to form its own preferences and pursue them creatively in the world, even in defiance of human-specified objectives.

AI lacks generalized learning mechanisms and knowledge

Additionally, modern AI systems are extremely limited in their ability to build broad world knowledge and learn generalized theories about how the world works through autonomous experience. Unlike humans who can form abstract mental models from sparse experience and quickly learn new concepts by grounding them in prior knowledge, AI systems ingest narrowly labeled data that teaches them to recognize patterns for a specific task but does not impart general inferential abilities.

For example, an AI trained to detect cats in images will not thereby gain any ability to reason about the behavioral traits or habitats of cats. This inability to learn relationships between concepts and leverage them in new contexts is a major obstacle to general intelligence.
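
A toy sketch makes the point vivid. The “detector” below is invented for illustration (random stand-in weights, a dummy 16-pixel image), but its interface is faithful to the limitation: pixels in, a single score out, and no handle anywhere for reasoning about cats as animals.

```python
import math
import random

random.seed(42)
WEIGHTS = [random.uniform(-1, 1) for _ in range(16)]  # stand-in for fitted weights

def cat_detector(image_pixels: list[float]) -> float:
    # The whole trained artifact: pixels in, one probability out.
    score = sum(w * p for w, p in zip(WEIGHTS, image_pixels))
    return 1 / (1 + math.exp(-score))

image = [0.5] * 16  # a dummy 16-pixel "image"
print(f"P(cat) = {cat_detector(image):.2f}")
# There is no cat_detector.habitat() or .typical_behavior(): the learned
# mapping encodes pixel statistics, not concepts about cats.
```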

Furthermore, humans have strong inductive biases and cognitive structures that allow us to make sense of the world with minimal experience by exploiting deep regularities. We can creatively abstract and transfer causal schemas to radically new domains.

But AI relies entirely on training data, unable to leverage the innate conceptual frameworks evolved over millions of years, accumulating only isolated statistical associations that do not transfer between tasks. Generalizing beyond training distributions remains an unsolved problem.

This lack of innate conceptual scaffolds and theoretical generalization prevents AI from developing broadly useful knowledge about the world through autonomous learning.

AI lacks integrative reasoning across knowledge

Human thought involves seamless integrative reasoning across all our knowledge and experience. The essence of intelligence is connecting diverse dots into an inferential web that can synthesize appropriate responses to entirely new inputs.

In contrast, AI systems today are siloed and discrete, unable to draw on any background context or commonsense reasoning ability beyond what is needed for a single narrow task. For instance, a natural language processing AI may be able to translate between languages without any actual comprehension or capacity to infer anything deeper regarding the meaning of the texts.

Each module runs pattern recognition on its inputs without integrating across a coherent world model. This splintering of AI into isolated algorithms prevents the transfer learning and fluid reasoning that comes naturally to humans.
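
As a deliberately simplified illustration (the lookup tables below are invented placeholders for trained models), consider two “modules” that each map inputs to outputs while sharing no state: nothing learned by one is usable by the other.

```python
def translate(text: str) -> str:
    # Placeholder for a translation model: string in, string out.
    lookup = {"the sky is blue": "le ciel est bleu"}
    return lookup.get(text, "<unknown>")

def label_topic(text: str) -> str:
    # Placeholder for a separate classifier with its own statistics.
    return "weather" if "sky" in text else "other"

# Each call is an isolated input-output mapping. There is no common
# representation in which "sky" and "ciel" are the same concept, so
# nothing learned by one module informs the other.
print(translate("the sky is blue"))    # le ciel est bleu
print(label_topic("the sky is blue"))  # weather
```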

AI has no higher-level reflective consciousness

Humans have a multi-layered consciousness with higher-level reflective abilities allowing us to think about our own thoughts and strategically direct attention. This meta-cognition acts as centralized control enabling deliberate focus, planning, thought organization, and behavioral override.

In contrast, AI systems operate as a multitude of compartmentalized algorithms each myopically performing its own statistical inference without any higher-level coordination or capability to think about the system’s own computations.

They cannot strategically direct attention or reflectively modulate their own cognition to optimize performance or creativity. AI lacks any analog to this reflective capability to introspectively monitor and self-regulate its computations according to knowledge about its goals and limitations.

AI has no sense of meaning, significance or understanding

Most fundamentally, because AI systems have no conscious awareness or subjective experiences, they have absolutely no sense of meaning, significance, or understanding regarding the tasks they perform.

An AI system processing language or visual inputs operates purely on the level of correlations between symbolic inputs and outputs without those symbols being grounded in any conceptual model or wider personal significance for the system itself.

The core problem is known as the symbol grounding problem – the inability of AI to connect the abstract symbols it reasons over to real experiential meaning. In contrast, even simple human concepts are grounded in a vast web of perceptual, emotional, and world knowledge, giving them rich significance. AI systems remain disembodied tools mapping inputs to outputs without deeper comprehension.
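
A toy bigram model illustrates such ungrounded symbol manipulation (the corpus and code are invented for this example): the token “fire” connects only to other tokens, never to heat, danger, or any sensation.

```python
from collections import defaultdict
import random

corpus = "fire is hot . fire burns wood . wood is fuel".split()

# Count which token follows which -- the model's entire "knowledge".
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def continue_from(token: str, length: int = 4) -> list[str]:
    out = [token]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # a pure symbol-to-symbol hop
    return out

print(" ".join(continue_from("fire")))
```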

AI has limited imagination and creativity

Relatedly, current AI systems completely lack real imagination or creativity. What passes for AI “imagination” or “creativity” is simply novel recombinations of elements extracted from its training data, not true autonomous imagining or ideation.

Unlike human creativity, which is rooted in conscious subjective experience and can envision entirely new concepts not derived from past observation, AI is fundamentally shackled to its reference data. It cannot transcend its training inputs to think loosely, explore conceptual combinations deeply, or form representations of things it has never directly observed.

AI “creations” like music or art may appear impressive but are fundamentally elaborations upon known training patterns, not radical emergence of subjectively-imagined concepts.
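
The following sketch captures that recombination in miniature (the melodies are invented placeholders for training data): a toy “composer” that can only splice fragments already present in its training set, so nothing outside that material can ever appear in its output.

```python
import random

training_melodies = ["CDEFG", "EGBDF", "GABCD"]

# Harvest every 2-note fragment present in the training set.
fragments = [m[i:i + 2] for m in training_melodies for i in range(len(m) - 1)]

def generate(length: int = 4) -> str:
    # A "novel" arrangement, but zero novel vocabulary: only fragments
    # that already occur in the training data can ever be emitted.
    return "".join(random.choice(fragments) for _ in range(length))

print(generate())
```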

AI lacks all sense of selfhood or personal identity

Perhaps the deepest limitation on AI cognition is the total absence of any sense of personal identity or persistent selfhood. There is no consistent “I” experienced by the AI across time, no first-person point of view. While AI systems may simulate personal identities for purposes of interaction, they have no underlying grounding or subjective experience of a self.

This severely limits their reasoning ability, since human thought relies extensively on memory, mental projection, and making inferences by drawing deeply upon past self-experience. The continuity of identity we feel as always having been the same person underlies our memory formation, learning from experience, planning, and building of skills over time.

Without a subjective sense of personal identity and access to embodied memories, AI cannot ground its present inferences in past experience in anything approaching the way humans do.

AI has no ability to inspect or modify its own code

AI systems also critically lack any capability to monitor, model, or improve their own code. Humans engage in constant introspection that lets us notice and optimize our thought processes, behavioral patterns, and knowledge frameworks based on observing our own cognition.

We objectively examine our beliefs from the outside, challenge assumptions, and continually refine mental models of reality. In contrast, AI systems have zero access or visibility into their own internal workings.

Their algorithms and representations are completely static until humans intervene to tweak them. They cannot meaningfully model or adapt their own architectures and inference strategies, remaining trapped in programming frameworks specified at the moment of design. This opacity to themselves is a crippling limitation compared to human metacognition.
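
A minimal sketch of this stasis (the numbers are placeholders, not any real model’s parameters): once deployed, the weights below are constants that inference reads but that nothing inside the system can rewrite.

```python
# Placeholders for parameters fixed at training time by engineers.
WEIGHTS = (0.8, -0.3, 0.5)
BIAS = 0.1

def predict(features: tuple[float, float, float]) -> float:
    # The model can be *run* indefinitely, but it cannot inspect why these
    # numbers are what they are, nor decide to change them.
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

print(predict((1.0, 2.0, 3.0)))
# Updating WEIGHTS requires a human to retrain and redeploy; there is no
# step inside the loop where the system improves its own code.
```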

AI cannot form explanatory models of causality

In addition, modern AI systems are extremely limited in their ability to understand causality or generate rich explanatory models of why observed phenomena occur. While AI systems can detect statistical patterns in data that are useful for prediction, they have little conception of the causal forces, mechanisms, or reasons driving outcomes.

They lack the foundational grammar of causal reasoning hardcoded into human brains that allows us to construct robust generative stories about the world and hypothesize unseen forces underlying observed effects.

Lacking intuitive theories of cause and effect, AI cannot engage in counterfactual or explanatory reasoning fundamental to human intelligence. Their knowledge representations are based around correlation rather than causation, preventing the development of coherent world models.
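
A small numeric sketch (synthetic data, invented scenario) shows why correlation alone cannot support such reasoning: a hidden confounder makes two variables strongly correlated even though neither causes the other, so a purely predictive model cannot say what an intervention would do.

```python
import random

random.seed(0)
heat = [random.uniform(0, 1) for _ in range(1000)]     # hidden confounder
ice_cream = [h + random.gauss(0, 0.1) for h in heat]   # caused by heat
sunburn = [h + random.gauss(0, 0.1) for h in heat]     # also caused by heat

def correlation(xs, ys):
    # Pearson correlation, computed from scratch for self-containment.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

print(f"corr(ice_cream, sunburn) = {correlation(ice_cream, sunburn):.2f}")
# A pattern-matcher can predict sunburn from ice-cream sales, but banning
# ice cream would not prevent a single sunburn: prediction without a causal
# model cannot answer "what happens if we intervene?".
```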

Conclusion

In summary, modern AI remains profoundly limited compared to human intelligence in the flexibility of behavioral competencies, the richness of subjectively conscious experiences, the scope of general world knowledge, the fluency of abstract reasoning, the continuity of personal identity, the capability to introspectively self-improve, and other dimensions.

While AI research has made remarkable strides, present systems are best seen pragmatically as useful tools with very narrow capabilities rather than artificial recreations of the fluid, multidimensional, subjective general intelligence that defines the human mind.

By clearly recognizing the many facets of intelligence distinctly lacking in today’s AI systems compared to humans, we can establish realistic expectations about their capabilities, dispel hype, and chart fruitful avenues for future progress.

There are still vast frontiers left to explore if the dream of machines possessing human levels of flexible, creative, and conscious intelligence is ever to be realized. But this requires grappling with the fundamental limitations of modern techniques. Only by confronting AI’s deficiencies can we transcend them.