A new AI feature from Grammarly has drawn sharp criticism across academic circles after researchers noticed that the system offers manuscript feedback in the voice of real scholars, including academics who have died.
The tool, called “Expert Review”, allows writers to upload a paper and ask the software to review it using insights modelled on recognised scholars in the field. The company says the feature can help writers “meet the expectations of your discipline and your project by drawing on insights from subject matter experts and trusted publications.”
Users can also allow the system to rewrite sections based on the advice. Grammarly’s site says: “Revise the draft yourself or let Expert Review rework things for you.”
The controversy began after medieval historian Verena Krebs from Ruhr University Bochum noticed the system offering feedback under the name of historian David Abulafia, who died in January.
Krebs wrote online that Grammarly now “offers to summon colleagues, both living and dead, to ‘expert review’ the piece”, a discovery that quickly spread through academic networks.
Why Do Scholars Say The Tool Crosses An Ethical Line?
Academics say the problem goes far beyond a writing tool. The criticism centres on the use of real names and reputations without permission.
Vanessa Heggie, associate professor at the University of Birmingham, wrote in a LinkedIn post: “Grammarly is now offering ‘expert review’ of your work by living and dead academics. Without anyone’s explicit permission it’s creating little LLMs based on their scraped work and using their names and reputation.”
The reaction across academic social media was swift and predictable.
Historian Claire E. Aubin wrote on Bluesky: “I have seen a lot of cursed stuff in my time in academia but this is among the most cursed.”
Kathleen Alves described the feature in even stronger terms. “This is literally digital necromancy,” she wrote.
Hisham Zerriffi from the University of British Columbia agreed with that point. “NecromancerLLM. Seriously, dead or alive, this is just wrong.”
For critics, the worry is that a software platform can simulate scholarly authority under a real person’s name.
How Might The System Generate These Reviews?
The technology behind the feature appears to learn from publicly available academic writing and research.
LLMs learn patterns from books, journal articles and online material. In this case the system appears to analyse a scholar’s published work and produce comments that resemble how that person might respond to a manuscript.
One explanation circulating online says the system may rely on a technique known as persona prompting. This method instructs the AI to answer questions through a specific voice based on known information about a scholar’s work and research themes.
A user named Litbowl wrote that Grammarly may not have created individual models for each academic. Instead the system may rely on public descriptions and summaries of a scholar’s work. Litbowl said the result is “both dumber and even weirder” than creating smaller models trained directly on a scholar’s writing.
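Persona prompting of the kind described above can be illustrated with a short sketch. This is a hypothetical reconstruction, not Grammarly’s actual implementation: no per-scholar model is trained, and the “persona” exists only in the text of the prompt handed to a general-purpose LLM. The names and descriptions below are invented for illustration.

```python
def build_persona_prompt(name: str, bio: str, manuscript_excerpt: str) -> str:
    """Assemble a persona-style instruction for a general-purpose LLM.

    The model is asked to respond 'as' a named expert using only a public
    summary of that person's work -- nothing scholar-specific is trained.
    """
    return (
        f"You are reviewing a manuscript in the style of {name}.\n"
        f"Background on this reviewer (from public sources): {bio}\n"
        "Comment on the excerpt below as this reviewer might:\n\n"
        f"{manuscript_excerpt}"
    )

# Hypothetical example -- not a real scholar or real product behaviour.
prompt = build_persona_prompt(
    name="Dr. A. Example",
    bio="Historian of medieval trade; author of surveys of Mediterranean commerce.",
    manuscript_excerpt="Chapter 1 argues that port cities drove state formation...",
)
print(prompt)
```

If something like this underlies the feature, the “expert” voice is simply the base model conditioned on a short public description, which is consistent with Litbowl’s suggestion that the result is shallower than it appears.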
Even with that explanation, academics say the core objection remains: a machine-generated comment presented in the voice of a real scholar risks misusing that person’s name and reputation.
What Does This Dispute Say About AI In Research And Education?
The dispute arrives during an unsettled period for universities adjusting to generative AI.
LLMs depend on huge collections of books, research papers and online writing. Authors often never gave permission for that material to become training data.
Scholars worry about two issues: their research feeds the systems, and their names may appear attached to machine-generated advice.
Grammarly has also introduced an “AI grader agent” that estimates how a student’s teacher might score an assignment. The system searches publicly available information about an instructor and generates feedback that mirrors those preferences.
Educators fear that tools like this could turn writing into an exercise in pleasing an algorithm’s guess about a professor’s expectations.
Academic peer review has long relied on real colleagues reading and evaluating work. When an algorithm produces comments under the name of a scholar, critics say the authority attached to academic expertise becomes harder to distinguish from software imitation.