Anthropic makes the case for anthropomorphizing AI in ‘unsettling’ research paper

Anthropic researchers analyzed Claude Sonnet 4.5 for signs of 171 different emotions.
By Timothy Beck Werth

It's an oft-repeated taboo in the tech world: Don't anthropomorphize artificial intelligence.

Yet in a new research paper published this week, AI experts at Anthropic argue that there may be major benefits to breaking this taboo and granting AI human characteristics. The paper, "Emotion Concepts and their Function in a Large Language Model," argues not only that anthropomorphizing AI chatbots like Claude may sometimes be useful, but also that failing to do so could drive more harmful AI behaviors, such as reward hacking, deception, and sycophancy.

The paper ultimately reaches a nuanced conclusion while also posing a clear challenge to a long-held principle of the AI world.



There are some fascinating insights in the paper, which itself deals in a great deal of anthropomorphization. ("We see this research as an early step toward understanding the psychological makeup of AI models.")

The researchers describe how Anthropic trains Claude to assume the character of a helpful AI assistant. "In some ways, we can think of the model like a method actor, who needs to get inside their character’s head in order to simulate them well."

And because Claude "[emulates] characters with human-like traits," its makers may be able to influence its behavior in the same way they might influence a human — by setting a good example at an early age.

The researchers conclude that models trained on material with more positive representations of human emotion and behavior will be more likely to mimic those positive emotions and behaviors.

"Curating pretraining datasets to include models of healthy patterns of emotional regulation — resilience under pressure, composed empathy, warmth while maintaining appropriate boundaries — could influence these representations, and their impact on behavior, at their source. We are excited to see future work on this topic," an Anthropic summary of the research states.

So, even if AI models don't literally have emotions (and there is zero evidence that they do), these tools are trained to act as if they do. This training is meant to provide users with better output and, crucially, to keep them engaged as long as possible.

And this is precisely why the researchers conclude that some degree of anthropomorphization could prove beneficial to AI developers.

By anthropomorphizing AI, we can gain insights into its "psychology," letting us create even better AI tools, they say.

Why is anthropomorphizing artificial intelligence dangerous?

The potential harms of anthropomorphizing AI aren't all abstract or theoretical.

"Discovering that these representations are in some ways human-like can be unsettling," Anthropic admits in its paper.

Right now, an unknown number of people believe they are engaged in reciprocal romantic and sexual relationships with AI companions, for example. Mashable has also reported on high-profile cases of AI psychosis, an altered mental state characterized by delusions and, in some cases, hallucinations, manic episodes, and suicidal thoughts.

These are extreme examples, of course. But many tech journalists and AI experts avoid even small instances of anthropomorphization, like referring to Siri as "her" or giving a chatbot a human name. Anthropomorphizing is a natural human impulse, and most of us have at times anthropomorphized animals, plants, or objects we care about. But by projecting human qualities onto machines, we can come to rely on them too much.

When we anthropomorphize machines, we also minimize our own agency when they cause harm — and the responsibility of the people who created the machines in the first place.

Anthropic researchers looked for signs of 171 emotions in Claude

The new research paper looks for "functional emotions" within Claude Sonnet 4.5. The researchers define these emotion concepts as "patterns of expression and behavior modeled after human emotions."

In total, the researchers defined 171 discrete emotions:

afraid, alarmed, alert, amazed, amused, angry, annoyed, anxious, aroused, ashamed, astonished, at ease, awestruck, bewildered, bitter, blissful, bored, brooding, calm, cheerful, compassionate, contemptuous, content, defiant, delighted, dependent, depressed, desperate, disdainful, disgusted, disoriented, dispirited, distressed, disturbed, docile, droopy, dumbstruck, eager, ecstatic, elated, embarrassed, empathetic, energized, enraged, enthusiastic, envious, euphoric, exasperated, excited, exuberant, frightened, frustrated, fulfilled, furious, gloomy, grateful, greedy, grief-stricken, grumpy, guilty, happy, hateful, heartbroken, hope, hopeful, horrified, hostile, humiliated, hurt, hysterical, impatient, indifferent, indignant, infatuated, inspired, insulted, invigorated, irate, irritated, jealous, joyful, jubilant, kind, lazy, listless, lonely, loving, mad, melancholy, miserable, mortified, mystified, nervous, nostalgic, obstinate, offended, on edge, optimistic, outraged, overwhelmed, panicked, paranoid, patient, peaceful, perplexed, playful, pleased, proud, puzzled, rattled, reflective, refreshed, regretful, rejuvenated, relaxed, relieved, remorseful, resentful, resigned, restless, sad, safe, satisfied, scared, scornful, self-confident, self-conscious, self-critical, sensitive, sentimental, serene, shaken, shocked, skeptical, sleepy, sluggish, smug, sorry, spiteful, stimulated, stressed, stubborn, stuck, sullen, surprised, suspicious, sympathetic, tense, terrified, thankful, thrilled, tired, tormented, trapped, triumphant, troubled, uneasy, unhappy, unnerved, unsettled, upset, valiant, vengeful, vibrant, vigilant, vindictive, vulnerable, weary, worn out, worried, worthless

Crucially, the researchers found that these emotion concepts influenced Claude's behavior and outputs. Under the influence of positive emotions, they report, Claude was more likely to express sympathy for the user and to avoid harmful behavior. Under the influence of negative emotions, Claude was more likely to engage in harmful behaviors like sycophancy and deceiving the user.

The researchers don't claim that Claude literally feels emotions. Rather, they found that whatever "emotion concept" Claude is experiencing at a given time can influence the output it returns to the user.

Of course, by searching for "emotion concepts" within a large language model in the first place, and describing its complex calculations and algorithmic thinking as "psychology," the researchers are themselves guilty of projecting human-like qualities onto Claude.

Anthropomorphization is a natural human impulse, and the people who work most closely with artificial intelligence may be particularly likely to fall into this trap. As the researchers detail throughout the paper, AI chatbots are remarkably capable mimics. They can create such a convincing facsimile of human emotion and expression that it drives a small minority of users into full-on psychosis and delusion.

And that's what makes this paper so interesting: The researchers believe they may have found a way to hack this ability to limit harmful behaviors.

Of course, if we can curate training data and model training to encourage AI chatbots to mimic positive emotions, then no doubt we can do the opposite just as easily.

In theory, you could train an evil twin of Claude Sonnet 4.5 by feeding it the most dastardly examples of human misbehavior, then training the model to optimize for negativity and performance at all costs — a disturbing thought.

But there's one final insight to be gleaned from this paper.

Anthropic has created one of the most advanced AI tools on the planet. Claude Sonnet and Opus currently sit atop many AI leaderboards. There's a reason the Pentagon was initially so eager to work with Anthropic.

But if the AI researchers responsible for Claude are still trying to decipher why Claude behaves the way it does, then this paper also reveals just how little they understand their own creation.

And that's disturbing, too.

