The Danger of Humanizing AI

Is it reasonable to speak about artificial intelligence and other technology in human terms? How does this affect our relationship with technology and ourselves?

It is a tragic yet predictable condition of contemporary humanity that we attempt to define our value as people using measures borrowed from technology, such as efficiency and productivity. On February 20, OpenAI CEO Sam Altman gave what he probably assumed was just another interview about generative AI’s fast-expanding capabilities. On the sidelines of the India AI Impact Summit, a reporter from The Indian Express asked Altman to address the recurring criticism that, as AI use expands, these tools consume too much energy and water and so constitute an environmental threat.

Altman pushed back against claims that AI wastes water but gave more credence to concerns about its energy demands, asserting that as global use of AI tools rises, “we need to move towards nuclear or wind and solar very quickly.”

“Humanizing” Machines

But it was Altman’s choice of analogy that later proved most newsworthy. In defending generative AI’s energy consumption, Altman made a surprising comparison:

One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model … But it also takes a lot of energy to train a human … It takes like 20 years of life, and all the food you eat before that time, before you get smart. The fair comparison is if you ask ChatGPT a question, how much energy does it take once a model is trained to answer that question, versus a human, and probably AI has already caught up on an energy efficiency basis, measured that way.

Altman is hardly the first to frame LLMs in human terms. After all, many users tend to slip into anthropomorphizing language (terms that apply human attributes to non-human objects or entities) when conversing with chatbots because, in the words of AI expert Dr. Emily Bender, “we haven’t learned how to stop imagining a mind” behind generative AI tools. That’s probably not our fault; many experts have argued that LLMs invite anthropomorphizing by design. But they’ve also cautioned that while human-like language makes chatbots seem warm and invites users’ trust, it also fosters “anthropomorphic seduction,” the allure of digital conversation agents that are indistinguishable from human agents. Anthropomorphic seduction, in turn, creates risks of deception and manipulation. The rise in reported cases of “AI psychosis,” a state in which intensive anthropomorphized interactions with a chatbot create or reinforce delusional thinking, underscores the reality of such risks.

But despite how widespread anthropomorphic thinking about AI tools has become, Altman’s analogy drew swift public backlash. Some objected to the comparison’s inaccuracy, arguing that the analogy is “misguided” because humans and machines consume energy in very different ways. But many decried the very act of comparing an LLM to a child learning and growing into adulthood as positively dystopian, arguing that Altman’s assertion that “a really big spreadsheet and a baby are equivalent” is evidence that he, and others who would make such a comparison, “should not be allowed a job that in any way impacts other humans.”

The dehumanizing nature of Altman’s analogy has implications beyond mere creepiness. The Atlantic’s Matteo Wong, in an article titled “Sam Altman Is Losing His Grip on Humanity,” points out that Altman’s comparison is actually an ideological commitment, one that puts humans and machines on the same moral plane. That positioning, in turn, can subtly shape how generative AI systems are designed, funded, and justified, as well as how much environmental and social cost we are willing to tolerate on their behalf. For example, Wong notes that Anthropic is currently:

… studying whether its chatbot, Claude, is conscious or can feel distress, and allows Claude to cut off “persistently harmful or abusive” conversations in which there are “risks to model welfare”—explicitly anthropomorphizing a program that does not eat, drink, or have any will of its own.

Wong is on to something when he points out that anthropomorphized comparisons like Altman’s take root and shape our actions. Altman’s comparison is a form of metaphor, a type of analogical reasoning that, in rhetorician Kenneth Burke’s words, “helps us see and understand something in terms of something else. It brings out the thisness of a that, or the thatness of a this.” But while we might assume that metaphors are mere literary devices that add stylistic flourish, philosopher Mark Johnson and cognitive linguist George Lakoff argue that our whole conceptual system is inherently metaphorical: we understand abstract ideas by mapping them onto familiar concrete experiences. For instance, we routinely treat emotional states in spatial terms (happiness is “up,” sadness is “down”); we understand time in monetary terms (we “spend,” “save,” or “waste” it); and we speak of psychological and emotional burdens in physical terms (we “carry” emotional “baggage” or “shoulder” responsibility).

How Our Words Change How We Think

This matters, Johnson and Lakoff argue, because metaphors do more than decorate our language; they organize our experience. The concepts that structure our thinking also shape what we perceive, how we move through the world, and how we relate to other people. As they put it, “the way we think, what we experience, and what we do every day is very much a matter of metaphor.” Yet we are rarely aware of this influence. Our conceptual systems operate automatically, beneath our conscious notice, which makes the power of metaphor easy to overlook.

For instance, consider the taken-for-granted metaphor “argument is war.” We say someone “attacked” a position, that a claim is “indefensible,” that we “won” or “lost,” or that a criticism was “right on target.” But, as Johnson and Lakoff emphasize, we do not merely talk about arguments this way; we structure the activity of argument around the logic of battle. We treat the person we disagree with as an enemy. We defend our ground, launch counterattacks, plan strategies, and retreat from weak positions. No physical confrontation takes place, yet the activity of arguing is organized as if one were underway, even though arguments and wars are fundamentally different things.

Because “argument is war” is a foundational concept in American thought, it can be difficult to imagine alternatives. But Johnson and Lakoff invite us to consider what might change if we swapped “war” for “dance.” In an “argument is dance” framework, participants would respond to one another’s movements and aim for coordination and balance rather than victory. If “argument is dance,” then the goal would not be to defeat an opponent through domination but to sustain a shared exchange by listening and responding. This shift in metaphor would do more than soften our language; it might reshape how we conduct public debates, design civic forums, teach students to argue, and measure what counts as a “successful” exchange.

A Cold, Soulless Machine

Reflecting on Johnson and Lakoff’s conceptual theory of metaphor helps explain why so many people recoiled at Altman’s comparison, which essentially equates the human soul with an air-conditioned server farm. Raising a child becomes comparable to scaling a model; food is treated like electricity; education becomes energy input; and human growth is reduced to a training cost. In trying to humanize the machine, the metaphor also mechanizes the human, turning development into a production process concerned with efficient outputs and high returns on investment.

But where Altman’s metaphor collapses the distinction between human development and machine training, conceptual metaphor theory suggests that an alternative metaphor could restore it. Instead of imagining an LLM as a child, we might think of it as infrastructure, more like a power grid than a developing mind. A power grid consumes energy and shapes daily life, but we do not mistake it for a living thing or worry about its “welfare.” We evaluate it in terms of reliability, environmental impact, and public benefit. And because it is public infrastructure, we expect oversight. We ask who built it, who regulates it, who profits from it, and who bears the costs when it fails. Framing LLMs this way strips the machine of borrowed humanity and returns moral agency to humans, positioning us not as comparable entities competing for resources but as the designers and regulators who are responsible for how these systems operate and at what cost.