
Uncharted territories in the evolution of LLM AI

  • Writer: Websters International
  • Jan 7
  • 2 min read

Updated: Jan 16

We look at the challenges that generative AI technology needs to overcome.


[Image: drawing of a brain with circuit boards inside]

by Michael Stevens


For a technology that some warn could become ubiquitous, even an existential threat to the future of humanity, it’s hard to believe that ChatGPT was only released in November 2022. The initial release sparked an undeniable wave of excitement, promising to revolutionise communication and problem-solving. I now reflect rather differently on my first exposure to the technology, on Christmas Eve 2022, courtesy of my then-enthusiastic 20-year-old computer-engineering-student nephew. At the time it seemed a harmless plaything, a party trick. Yet the longer the technology has been around, the clearer its possibilities – and its limitations – have become.


ChatGPT and its competitors, such as Google Gemini, have demonstrated potential applications across diverse fields, reportedly aiding productivity in education (for teachers and students, with dubious results for the latter), customer service, content creation, healthcare and beyond.


Amid the promise, it is imperative to address the very real challenges that users and consumers of ChatGPT and other forms of LLM AI face. The technology is impressive in its apparent ability to understand text, or to create imagery or code on instruction, but it still has far to go. Regular hallucinatory and misleading outputs, the threat of misinformation spreading across the internet, and limitations in deep, contextual understanding are current issues that cannot be overlooked. And that is before we get to the ethical implications: concerns about privacy; issues of bias (and reports suggest the technology is much clumsier in languages other than English); a lack of transparency; reliance on model training with a significant carbon footprint; and, in the longer run, the threat of workforce deskilling and mass unemployment.


Many people and industries have committed themselves to working on these limitations. Undoubtedly LLMs are here to stay, and they will be exploited by the unscrupulous long before the technology is finally fit for a broader purpose. Right now, we are all using the least refined form of LLM AI we will ever come across. To draw an analogy with the progress of the internet, we are not even in the AOL foothills, let alone near the Google moment – or, put another way: while today practitioners struggle with elaborate prompts to get half-usable text, one day online services will mean you won’t even think about what is under the bonnet.


Believe me, such a moment will come. At one point in the 1990s Websters worked on a product called Microsoft AutoRoute, a CD-ROM designed to give you directions on your desktop PC. You couldn’t carry a PC around with you, and even early laptops were scarcely less bulky – but when was the last time you used a paper map?


For more information on Websters' AI quality assurance service, see What Is ADVISOR? or contact us.
