Can ChatGPT spell? A small experiment with surprising results
- joannama8
- Oct 30
- 3 min read
Prompted by recent articles and viral trends about AI’s language quirks, Joanna from our marketing and design team put ChatGPT to the test with spelling and letter challenges to see just how well it handles the little details.

It all started with a viral trend. After reading an article in The Guardian and watching a few short videos about the uses and limitations of ChatGPT, I noticed people online were setting it playful challenges. One of them caught my eye. Instead of asking it to write stories or solve problems, users were asking it to count letters in words and names. The results were sometimes spot on, but often surprisingly off.
Naturally, I decided to try it myself.
As a native Brazilian Portuguese speaker, I began with something that had already caused confusion in a Portuguese video I had seen. I asked ChatGPT to list all 26 Brazilian states, plus the Federal District, and identify which ones did not contain the letter “A”. The answer came back confidently formatted and dotted with tick emojis. It proudly concluded that Pernambuco, Sergipe and Rio de Janeiro were the only ones without an “A”.
Except, of course, Rio de Janeiro and Pernambuco do have “A”s.
It was both funny and intriguing. ChatGPT can summarise research papers and hold a discussion about philosophy, yet it seemed to struggle with a task that a child could do by glancing at a page. How could that be?
To understand this, it helps to know how ChatGPT actually “reads”.
When we look at a word, our eyes see individual letters. ChatGPT does not. It processes text as units of data called tokens. A token might represent a full word, a fragment of a word or sometimes even a single character. The model predicts the next token based on the patterns it has learnt from enormous amounts of written text. It is interpreting language, not visually analysing letters.
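To make this concrete, here is a minimal sketch using tiktoken, OpenAI’s open-source tokenizer library. This is an illustration rather than a view into the chat model itself, and the exact splits vary by model and encoding; cl100k_base is one encoding used by recent OpenAI models.

```python
# A minimal sketch of how text becomes tokens, using the open-source
# tiktoken library. Exact splits depend on the chosen encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["Pernambuco", "Rio de Janeiro", "Shrewsbury"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]
    # The model receives these chunks, not a stream of individual
    # letters it could tick off one by one.
    print(f"{word} -> {pieces}")
```

Run it and each name typically comes back as a handful of chunks rather than a letter-by-letter sequence, which is why “does this word contain an A?” is not a question the model can answer just by looking.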
So when you ask it to find words without an “A”, it is reasoning through linguistic patterns rather than counting the actual letters. If accents or spacing come into play, or if the language mix is unusual, it can easily misread what we mean.
To see whether the same behaviour appeared in English, I gave it another test. I asked ChatGPT to list the county towns of England and identify which ones did not contain the letter “A”. This time, the answer was confident and, at first, accurate. It correctly listed 11 towns with no “A”: Bedford, Beverley, Chester, Dorchester, Gloucester, Hereford, Oxford, Shrewsbury, Truro, Worcester and York.
But then came the twist. When I asked for the towns and cities that did contain the letter “A”, ChatGPT proudly included London, Leicester and Bristol, none of which contains an “A” at all.
It was a gentle reminder that ChatGPT is not scanning text the way we do. It is predicting patterns in words and meaning, not literally detecting letters. For tasks involving precise letter recognition or counting, it can still slip up.
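By contrast, a conventional program checks letters directly and cannot slip in this way. Here is a minimal sketch: Maidstone and Warwick are names I have added for contrast, and the accent handling covers the edge case mentioned earlier.

```python
# Checking for a letter is trivial for ordinary code: inspect each
# character directly instead of predicting patterns.
import unicodedata

def contains_a(name: str) -> bool:
    # Decompose accented characters (e.g. "á" -> "a" + combining accent)
    # so accented vowels still count, one of the pitfalls noted above.
    decomposed = unicodedata.normalize("NFD", name)
    return "a" in decomposed.lower()

names = ["London", "Leicester", "Bristol", "Maidstone", "Warwick",
         "Pernambuco", "Sergipe", "Rio de Janeiro"]

for name in names:
    verdict = "contains an A" if contains_a(name) else "no A"
    print(f"{name}: {verdict}")
```

Unlike the model, this check is deterministic: London, Leicester and Bristol come back with “no A” every single time.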
Artificial intelligence is evolving at an incredible pace, but this small experiment shows why humans need to be part of the process. We bring the detail, the cultural context and the careful checking that AI is not yet able to replicate. Perhaps that is what makes the experiment compelling: final accuracy can only be guaranteed when humans remain engaged, applying their attention to detail and cultural insight alongside the technology.
Videos I watched:
https://youtube.com/shorts/7pQrMAekdn4?si=d2_1zCpxYxxdu-ZY
https://www.youtube.com/shorts/l9qlLvHwGX8
Article I read: https://www.theguardian.com/australia-news/2025/aug/08/openai-chatgpt-5-struggled-with-spelling-and-geography
Contact Websters to find out how we can work with you to ensure your business communicates with cultural intelligence in international markets.


