Superhuman, the AI software company formerly known as Grammarly, launched a feature in August 2024 that generated AI writing suggestions attributed to real journalists and experts, including The Verge editor-in-chief Nilay Patel, without asking any of them for permission. The feature, called Expert Review, has since been discontinued, but not before it sparked a class-action lawsuit and one of the most pointed on-record confrontations between a tech CEO and a journalist in recent memory.
Superhuman, which rebranded from Grammarly late last year as it expanded into a broader AI productivity suite encompassing writing assistant Grammarly, document tool Coda, and an email client called Mail, counts roughly 40 million daily active users. CEO Shishir Mehrotra — formerly chief product officer at YouTube and now a board member at Spotify — agreed to be interviewed by Patel on The Verge's Decoder podcast despite knowing the Expert Review controversy would dominate the conversation.
What Expert Review Actually Did
The feature offered users writing suggestions framed as being "inspired by" specific named experts and their published works. According to Patel, a generated note bearing his name suggested he would advise writers to "raise the stakes of a headline by adding emotional or stakes-based words," advice he described as something he has "literally never said" in over 15 years as an editor. Investigative journalist Julia Angwin, among others named without consent, filed a class-action lawsuit; the roster of named experts even included the late cultural critic bell hooks.
Mehrotra's initial response to complaints was to offer an email-based opt-out. The feature was subsequently pulled entirely. The CEO maintains the decision to discontinue it came before the lawsuit was filed and was driven by strategic misalignment, not legal pressure — a claim Patel disputed in the interview.
"You just made something up and put my name on it. There's no attribution here. This isn't anything I ever said. I'm not even sure how you would get to the idea that based on my work I would ever say anything like this," Patel told Mehrotra.
The Attribution Argument — and Why It Broke Down
Mehrotra repeatedly described Expert Review as an attribution feature, arguing that every suggestion panel clearly disclosed it was "inspired by" a specific work and included a link back to the source. He drew a distinction between attribution — which he said is a standard internet practice — and impersonation, which he called a "very different standard" that the feature did not cross.
Patel rejected this framing. Both New York and California, he noted, have laws barring companies from using a person's name and likeness for commercial purposes without consent, a lower bar than outright impersonation, and the feature used his name to sell software. Mehrotra declined to engage with that specific legal argument, saying he would leave the details to the courtroom. The company has stated it believes the claims in the lawsuit are "without merit."
The feature's mechanics compounded the credibility problem. According to Mehrotra, the suggestions were generated by feeding published work into mainstream large language models — the same process a user could replicate by asking ChatGPT or Claude to summarize what a named person might say. "It came right from the popular LLMs," he told Patel. The result, as Patel's example illustrated, was a confident-sounding but fabricated editorial opinion attached to a real person's name.
A Small Team, a Buried Feature, and a Big Fallout
Mehrotra described Expert Review as the work of "a product manager and a couple of engineers" — a small, largely autonomous team that he said he had not reviewed before launch. He acknowledged he only looked closely at the feature after complaints surfaced. "I came and looked at it and I said, 'This is off-strategy for us,'" he told Patel.
Usage of the feature was described as minimal, and it apparently went undetected for months before journalists at The Verge and other outlets reported on it. Mehrotra argued that low usage and quick removal should count in the company's favor. Patel was not persuaded, pointing out that the opt-out mechanism — an email address — was offered first, and that the feature was only fully discontinued after the lawsuit was filed, not before.
The exchange crystallized a wider tension: the decision-making frameworks Mehrotra described, including a proprietary anti-groupthink process he has written about publicly, apparently failed to surface the obvious concern that using journalists' names without permission would anger those journalists.
Creator Economics and the Platform Pitch
Beyond the specific controversy, the interview opened into a broader and equally pointed debate about what AI is doing to the creator economy. Mehrotra pitched Superhuman's forthcoming agent platform — called Superhuman Go — as a route for experts to monetize their knowledge directly, offering a 70/30 revenue split similar to app store models.
Patel's response was blunt: the pitch asks creators to build new revenue streams on a platform whose underlying models were trained on their work without compensation. "You're saying I need to invent some new business model as an expert," he said, "because my actual body of work has been reduced to zero value."
Mehrotra did not dispute that creators face structural pressure. He argued, drawing on his YouTube experience, that technological disruption has historically created new opportunities for those willing to adapt. He cited Kevin Kelly's 1,000 true fans model, in which a creator who gets 1,000 people to pay $100 a year has built a $100,000-a-year business, as the template Superhuman's platform is designed to enable.
An NBC News poll cited during the interview put AI at a net favorability of -20, below ICE and only marginally above the Democratic Party. Mehrotra attributed that figure primarily to job anxiety among non-creative workers, not to concerns about attribution or data extraction. Patel argued the two are expressions of the same underlying dynamic: a technology that captures enormous value from existing human output while returning little to its sources.
What This Means
The Expert Review episode establishes a clear precedent: using real people's names to add credibility to AI-generated content, even with disclosure language, is likely to trigger both legal action and reputational damage, regardless of whether the feature sees significant usage. For companies building AI products on top of public figures' published work, the gap between what disclosure language is assumed to permit and what the law and creators will actually accept is now demonstrably a commercial risk.
