On Wednesday, Grammarly quietly killed a feature most of its 30 million daily users probably didn't know existed. The AI-powered "Expert Review" tool had been presenting editing suggestions under the names of real authors, academics, and writing professionals, people with credentials, reputations, and voices users might recognize. The problem: none of those experts had agreed to lend their names to the feature. They hadn't reviewed the suggestions. They hadn't been paid. In many cases, they didn't even know their names were being used until the class-action lawsuit landed.
According to WIRED, the lawsuit alleges that Grammarly's feature created what amounts to synthetic endorsements—AI-generated advice presented as if it carried the authority of real human expertise. The feature would surface a suggestion, attach a real person's name and credentials, and let users assume that person had reviewed their work. It's attribution as branding, not accountability. And it's the latest example of how AI companies are appropriating human expertise without consent, compensation, or even acknowledgment.
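To make that distinction concrete: in a design like the one the complaint describes, the expert's name would plausibly be nothing more than display metadata stapled onto the suggestion, with no structural link to any actual act of review. Here is a purely hypothetical sketch of what that might look like; the types and field names are invented for illustration and are not drawn from Grammarly's actual code or schema.

```typescript
// Hypothetical shape of an AI editing suggestion with synthetic attribution.
// All names here are invented for illustration, not Grammarly's real schema.
interface EditingSuggestion {
  originalText: string;
  suggestedText: string;
  rationale: string; // generated by the model, not written by a person
}

interface AttributedSuggestion extends EditingSuggestion {
  // Display-only branding: a real person's name and credentials, attached
  // after generation. Note what is missing: no review record, no sign-off
  // timestamp, no consent reference, no way to trace the suggestion back
  // to anything the named expert actually did.
  attributedTo: {
    name: string;        // e.g. a real professor of rhetoric
    credentials: string; // the title shown to the user
  };
}

// Attribution as branding: the expert is stapled on at render time.
function brandSuggestion(
  s: EditingSuggestion,
  name: string,
  credentials: string
): AttributedSuggestion {
  return { ...s, attributedTo: { name, credentials } };
}
```

The point of the sketch is the absent fields: accountability would require a record tying the named person to a real act of review and consent, and in the scheme the lawsuit describes, no such record exists.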
The legal framework is finally starting to catch up. This isn't just about Grammarly. It's about an entire business model that treats human knowledge as raw material for algorithmic repackaging. The lawsuit targets a practice that's become standard across the AI industry: training models on human work, then deploying those models in ways that simulate the authority of the people whose work fed the system. Grammarly's mistake wasn't doing something unusual—it was making the appropriation visible enough to sue over.
Synthetic attribution is different from the copyright battles currently playing out in court over AI training data. Those cases hinge on whether using copyrighted material to train a model constitutes infringement. This is about something more insidious: the use of real people's identities and reputations to legitimize AI output. It's not just about what the AI learned—it's about who the AI is pretending to be.
The business logic is obvious. An editing suggestion from "Grammarly AI" is just another algorithmic nudge. An editing suggestion from a named professor of rhetoric at a respected university carries weight. It implies human judgment, editorial discernment, and professional accountability. Grammarly wasn't selling AI—it was selling the illusion of human expertise at scale. The feature turned real people into synthetic endorsers, their names and credentials used to paper over the gaps in what the algorithm could actually do.
What gives this case its particular sting is that Grammarly's core product is already a trust-based service. People use it because they're uncertain about their own writing. They're looking for authority, for someone (or something) that knows better. Attaching real names to AI suggestions exploits that vulnerability. It's not just about accuracy. It's about the appearance of human oversight in a system that has none.

The lawsuit also exposes the legal ambiguity around synthetic attribution. Right of publicity laws generally protect people from having their names or likenesses used for commercial purposes without consent. But those laws were written for a world of print ads and celebrity endorsements, not AI systems that generate millions of micro-interactions under real people's names. The question isn't whether Grammarly used someone's name—it's whether doing so at algorithmic scale, in a context where users reasonably believe they're getting personalized human feedback, crosses the line into misappropriation.
Grammarly's decision to shut down the feature immediately after the lawsuit was filed suggests the company's legal team didn't love its odds. That's significant. AI companies have been aggressive about defending their training practices, arguing that ingesting public data is fair use and that the output is transformative. But when the output explicitly invokes real people's identities, the defense gets harder. It's one thing to say an AI learned from publicly available writing. It's another to say the AI can present its suggestions as if they came from the people it learned from.
This case could set a precedent for how courts handle synthetic attribution across the AI industry. If the plaintiffs win, it would establish that attaching real names to AI-generated content without consent is legally actionable, even if the AI was trained on publicly available material. That has implications far beyond Grammarly. Any AI system that uses real people's identities to legitimize its output would face the same exposure, whether it's a chatbot citing named experts, a design tool crediting specific artists, or a legal AI referencing real attorneys.

The timing matters too. The AI industry is in the middle of a credibility crisis. Users have grown skeptical of AI's accuracy, especially after high-profile cases of hallucination, fabricated citations, and algorithmic bias. Companies have responded by trying to make AI feel more human: adding conversational interfaces, personality quirks, and now, apparently, real people's names. But the more human AI tries to seem, the more it invites scrutiny about whether it's misrepresenting who or what is actually behind the output.
What Grammarly's lawsuit makes clear is that synthetic attribution isn't a gray area; it's a liability. The feature wasn't just ethically questionable. It was legally risky in ways the company clearly didn't anticipate. And the fact that Grammarly pulled it so quickly suggests other AI companies should be paying attention. The next wave of AI litigation isn't just about training data. It's about how AI companies use real people's identities to sell the illusion of human judgment.
The class action is ongoing, and Grammarly hasn't commented publicly beyond confirming that the feature has been discontinued. But the broader question has already been answered. AI companies have been operating as if human expertise were a resource to be mined and redeployed without consent. The legal system is starting to disagree. And for an industry that has built its growth on moving fast and apologizing later, that's a problem that won't be fixed by shutting down a single feature.