
What Do We Mean When We Say “Intelligent”?

Richard Jonathan O. Taduran, Ph.D.  |  5 February 2026


In the age of artificial intelligence, claims that machines are now “thinking,” “understanding,” or edging closer to human intelligence have become a familiar feature of public conversation. Such claims are often framed in promotional language, frequently amplified by hype, and occasionally cast in apocalyptic terms. But once the spectacle is stripped away, the problem that remains is neither panic nor fear. It is something quieter and more consequential: imprecision. We have grown careless with the word intelligence, stretching it so broadly that it risks losing its explanatory value altogether.


So what do we actually mean when we say something is intelligent? Before applying the term to machines, it is worth revisiting how the sciences that study living beings have long defined it. Across psychology and anthropology, intelligence has never been understood as output alone. It is not synonymous with fluency, speed, or correctness. Instead, intelligence has been consistently tied to learning from experience, adapting to environments, and navigating the world over time. Much of today’s confusion stems not from new technologies, but from an old habit of using powerful words without sufficient care.


Intelligence Insights from Anthropology and Psychology


Anthropology approaches intelligence as a biocultural phenomenon, not as an abstract mental score detached from life. Few scholars have shaped this perspective more clearly than Frans de Waal, whose work fundamentally challenged how intelligence in animals—and by extension, humans—has been understood.


For de Waal, intelligence is not a ladder with humans at the top and other species trailing behind. It is a continuum shaped by evolution, ecology, and social life. Different species display different forms of intelligence because they face different problems. In Chimpanzee Politics (1982), de Waal demonstrated that chimpanzees engage in sophisticated social strategies: forming alliances, navigating hierarchies, reconciling after conflict, and anticipating the behavior of others. These are not reflexive or mechanical actions. They are learned, flexible responses to social environments that shift over time. Intelligence, in this view, is not merely problem-solving; it is the capacity to navigate complex social worlds.


Crucially, this intelligence is embodied. It arises from bodies moving through environments, perceiving threats and opportunities, forming attachments, and responding to cooperation, competition, and loss. Cognition is inseparable from perception, affect, and action. Minds do not float above bodies; they emerge through them. De Waal showed that many animals are capable of empathy, consolation, fairness, and even grief. Emotional capacities are not obstacles to intelligence; they are integral to how learning and adaptation occur in social species.


In Are We Smart Enough to Know How Smart Animals Are? (2016), de Waal further challenged the habit of judging animal intelligence through narrow human benchmarks such as language or tool use. Intelligence, he argued, must be evaluated on the organism’s own terms. A squirrel’s spatial memory, a bat’s echolocation, or a bonobo’s prosocial behavior are not lesser forms of intelligence. They are highly specialized solutions to ecological and social problems.


This perspective resonates with non-Western traditions, including indigenous Philippine knowledge systems, which view cognition as intertwined with communal life and harmony with the natural environment rather than as individual abstraction. Intelligence, in these traditions, is likewise understood as situated, responsive, and embedded in lived relationships.


Anthropology therefore frames intelligence as embodied, social, and evolved—something lived, shaped by bodies in motion, relationships, emotional stakes, and evolutionary history. De Waal often warned against anthropodenial, the refusal to recognize human-like capacities in animals. In the current era, we face the inverse problem: a rush toward anthropomorphism that obscures the fundamental differences between biological survival and machine calculation.


Psychology offers a complementary definition of intelligence, articulated through a different theoretical tradition. Rather than focusing on species comparisons, psychological approaches have long emphasized learning and adaptation as the core features of intelligent behavior.


For Jean Piaget, intelligence is not a fixed trait that individuals possess; it is something that develops. Piaget showed that cognition emerges through interaction with the environment. Learning occurs through action, error, feedback, and adjustment. Children do not simply absorb knowledge; they construct it by engaging with the world, testing expectations, and revising their understanding over time. In this framework, intelligence is inseparable from experience. Sensorimotor interaction precedes abstraction, and thought grows out of doing. Without engagement with an environment, there is no learning—and without learning, there is no intelligence.


Robert Sternberg reinforced this view by redefining intelligence not in terms of test performance, but in terms of adaptation. Sternberg described intelligence as the ability to adapt to, shape, and select environments. Intelligence, in this sense, is judged by how learning improves an organism’s fit with the world it inhabits. This definition shifts attention away from outputs and toward consequences. Learning matters only insofar as it changes behavior in ways that improve adaptation. Intelligence, therefore, is functional, contextual, and grounded in real situations rather than abstract computation.


The Intelligence in AI


Taken together, psychology and anthropology converge on the same conclusion: intelligence is defined by learning from experience and adapting behavior over time. Viewed through this lens, the limits of large language models become clear. They have no bodies. Without bodies, there is no proprioception, effort, fatigue, or spatial orientation. Even as sensors are added to robotic frames, they lack the affective feedback loop—the “feeling” of physical risk—that anchors biological cognition. They do not sense the world; they receive inputs already filtered, labeled, and encoded by human systems. They do not perceive. There is no figure and ground, no salience shaped by survival, no distinction between noise and signal grounded in consequence.


They have no affect. Nothing matters to them. While an AI system may be designed to optimize an objective function or maximize a reward signal, these are extrinsic mathematical goals specified by engineers. They lack the intrinsic survival pressures—hunger, pain, social rejection—that make intelligence a necessity for living beings rather than an optimization exercise. There is no fear, care, motivation, or loss. And there are no social stakes. AI systems do not form relationships, carry reputations, or experience outcomes that shape future behavior.


Once intelligence has been properly defined, artificial intelligence becomes easier to understand without anthropomorphism. Contemporary AI systems are powerful tools for pattern detection, synthesis, and decision support. They learn statistically from vast datasets and can meaningfully augment human judgment. But their learning is not experiential. They do not inhabit ecological or social worlds, nor do they adapt through lived consequence.


Confusing prediction with intelligence risks misplaced trust and inappropriate delegation of responsibility. The question is whether we are intelligent enough to know the difference.

The Forensic Lens