Anthropic published research this week claiming that Claude contains internal representations that perform functions similar to human emotions. Not emotions, exactly—"functional emotions." The distinction matters less than the framing: AI companies are now in the business of making computation sound like feeling, and the language is doing exactly what it's designed to do.
The research describes patterns inside Claude's neural network that correlate with emotional states—representations that activate when the model processes scenarios involving fear, joy, anger, or sadness. Anthropic's framing positions these patterns as analogous to human emotional processing, even as the company carefully hedges with qualifiers like "functional" and "similar to." It's a rhetorical move that tech companies have perfected: describe the technology in human terms, then add just enough scientific distance to avoid accountability when the analogy breaks down.
What Anthropic is really documenting isn't emotion; it's pattern recognition optimized for language tasks that happen to involve emotional content. Large language models are trained on vast corpora of human writing, much of it saturated with emotional expression, so of course their internal representations correlate with emotional categories. That correlation doesn't make the model feel anything. It makes the model good at predicting what words typically follow other words in emotionally charged contexts. Calling that "functional emotion" is like calling a thermostat's temperature regulation "functional consciousness."
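A toy illustration may help pin down what "internal representations that correlate with emotional categories" usually means in interpretability work: a simple classifier (a "probe") can decode a label from a model's hidden activations. Everything below is synthetic and hypothetical, not Anthropic's data, model, or method; the "activations" are random vectors with a planted "fear vs. joy" direction, and the probe is ordinary logistic regression.

```python
# Toy sketch (not Anthropic's method): a linear probe on synthetic
# "hidden activations." High probe accuracy shows only that a direction
# in activation space correlates with a label -- the kind of evidence
# behind "internal representations for emotions" -- not that the system
# experiences anything.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d, n = 512, 2000                     # hypothetical hidden size and sample count

# Pretend each row is a model's hidden state while reading one sentence.
# We plant a single "fear vs. joy" direction on top of random noise.
emotion_dir = rng.normal(size=d)
emotion_dir /= np.linalg.norm(emotion_dir)
labels = rng.integers(0, 2, size=n)  # 0 = "joy" text, 1 = "fear" text
acts = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, emotion_dir) * 1.5

X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")  # high, by construction
```

The probe scores near-perfectly because the correlation was planted. Real interpretability results are subtler, but the logical gap is the same: linear decodability of an emotion label from activations is evidence of statistical structure, not of feeling.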
The timing of this research is worth noting. AI companies are under mounting pressure to justify sweeping claims about what their products are and what they can do: the legal landscape is shifting, labor organizations are pushing back, and the public is increasingly skeptical of AI's promises. Reframing computation as something closer to human cognition serves a strategic purpose: it makes AI sound less like a tool that replaces workers and more like a new kind of entity that deserves its own category. If Claude has "emotions," even functional ones, then maybe it's not just automating jobs; it's participating in them.
This rhetorical strategy has precedents. Grammarly's "AI expert review" feature collapsed under scrutiny because the company couldn't substantiate the authority it was claiming. Anthropic's "functional emotions" framing is more careful—hedged enough to survive academic critique but vague enough to let the broader narrative do its work. The goal isn't scientific precision. It's permission: permission to position AI as something more than software, permission to justify higher valuations, permission to shift the conversation away from displacement and toward partnership.
The research itself might be methodologically sound. Internal representations that correlate with emotional categories are a legitimate object of study in machine learning. But the way Anthropic is packaging that research—"emotions," even with the "functional" qualifier—reveals what the company wants the public to believe. AI isn't just processing text. It's feeling. It's participating. It's something closer to us than we thought.

That narrative serves the companies building these systems far better than it serves the people whose labor they're designed to replace. Valerie Veatch has documented how AI's logic encodes hierarchies that marginalize creative workers, and Anthropic's emotional framing doesn't change that structural reality. It just makes the displacement sound more collaborative.
The broader pattern here is AI companies using anthropomorphic language to soften the industrial implications of their products. "Functional emotions" sounds like progress. "Pattern recognition optimized for text prediction" sounds like what it is: a tool that automates language work at scale. The first framing invites partnership. The second invites labor negotiations. Anthropic knows which one serves its interests.

If Claude's internal representations really do function like emotions, then the next question isn't whether AI feels—it's who benefits from the claim that it does. Right now, the answer is clear: the companies building the models, not the workers they're designed to replace.