What people told us about AI-generated language
- Apr 16
Despite the growing use of Large Language Models, our contacts in the language and content industry told us that the case for human expertise has only grown stronger.

“Good enough” isn't good enough
When we launched the research underpinning our new white paper, conducted at the beginning of the year and published this month, we'll admit to a quiet hope: that we would find more discomfort with the linguistic output of Large Language Models than the public conversation suggests. For the past few years we have watched talented editors, writers and translators face a cultural and economic shift that has systematically undervalued their expertise. We have seen skilled linguists echo the popular view that AI can “pretty much do your job now”, and we have seen the downstream effects on careers and incomes. As a company we have deep roots in the precision of language and knowledge, as well as long‑standing partnerships with these professionals. So yes, we had reason to hope the data would support the case for human expertise, and even so, what our survey actually showed surprised us.
AI isn't reducing workload
There is a persistent belief that AI content generation saves time and cuts costs. Our findings tell a different story: AI cannot reliably deliver the brand-level quality, cultural appropriateness, factual accuracy or human meaning that high-trust content requires every time. As one editor put it, “It's actually creating more work for people – that's where the friction is at the moment”.
Why tone of voice remains the most consistent failure
For organisations that depend on a precise voice, trusted communications or culturally sensitive content, tone is a strategic concern. And it is one that AI does not handle reliably at scale. The underlying reason is that tone is contextual, relational, cultural, situational and dependent on brand memory – differentiated through micro-choices at the phrase level. AI's generic patterns flatten these distinctions, creating content that is not incorrect per se, but is general and undifferentiated; in short, it’s bland and fails to meet even the most basic requirements of high-quality copy.
“Good enough” is a dangerous standard
The phrase “good enough” surfaced repeatedly in our research, most often from professionals reflecting on leadership pressure to adopt AI rapidly. But the findings suggest that output described as good enough is a chimera. The people tasked with making AI usable spoke of hidden work, constant revision cycles, subtle but consequential errors and factually plausible statements that turn out to be wrong.
Pushing back on the built-in issues
A common assumption is that poor output can be fixed with better prompts. Our respondents suggest that this misses the point. The issues they encounter are not the result of weak prompts, insufficient training or unfamiliarity with the tools. They stem from the design of the models themselves: a tendency to invent facts with confidence, an inability to maintain a consistent voice across longer documents, a lack of genuine contextual understanding and an inability to handle nuance. AI may be a remarkable technology, but it is not a dependable writer, editor or translator.
The genie is out of the bottle
We aren’t suggesting AI can be rolled back. The technology has genuine potential across many fields. But in language‑based professions – the ground Websters stands on – the risks are real and underacknowledged: reputational damage, inconsistent messaging, inappropriate tone, factual inaccuracies that escape review and, in the worst cases, legal exposure from misleading or erroneous content.
Is there a perception problem? Senior leaders are often shown polished demonstrations of what AI can produce under ideal conditions. This white paper reflects what happens when organisations deploy it at scale, with real content, real brands and real deadlines.
The case for human expertise has only grown stronger
Our findings reinforce something we have always believed: linguistic professionals are the safeguard that unlocks the true power of AI. While AI can help draft, trained humans ensure the clarity, accuracy, tone, cultural nuance and brand consistency that serious communications demand. If anything, the hidden workload AI introduces has made human intelligence, in the form of expert editors and translators, more essential.
Want the full picture?
Our thanks to everyone who contributed to this research. Your insights shaped a report we hope will prompt honest conversations about how AI is genuinely performing in the field of language services. The full white paper, including participant quotes and detailed analysis, is available to download from our website.

