I test AI chatbots for a living — 7 common glitches and what to do when they happen
Glitch happens

If you are a regular chatbot user, you've probably experienced "hallucinations," one of the most common glitches made by AI, particularly large language models (LLMs).
Essentially, hallucinations are when a chatbot generates information that sounds plausible but is actually entirely fabricated. For example, a user may ask for a detailed explanation on a complex subject, and the chatbot might provide a confident yet factually incorrect answer.
These inaccuracies stem from the model’s reliance on statistical patterns in its training data rather than any real understanding of truth. While some models, such as GPT-4.5, are designed to hallucinate significantly less, it can still happen.
Here are more common errors made by chatbots and the reasoning behind them.
1. Failing to admit uncertainty
Not unlike humans, AI chatbots struggle to admit "I don't know." This goes beyond hallucinating: a reluctance to acknowledge uncertainty can mislead users, because the chatbot may present incorrect information authoritatively.
Most chatbots are programmed to provide definitive answers, even when they lack sufficient information. From famously miscounting the letter "r" in the word ‘strawberry’ to more serious issues like fabricated citations in legal cases, chatbots would be far more transparent if they simply admitted when they might be wrong.
What to do: While it may mean taking a few extra steps, users should do their own fact-checking. If they suspect the chatbot is wrong, they can rephrase their prompt or ask follow-up questions. Personally, I’ve noticed that using the search tool or deep research mode can help eliminate ambiguity.
2. Misunderstanding context and nuance
Despite advances in emotional intelligence, chatbots often lack a deep understanding of context, which leads to inappropriate or irrelevant responses. While chatbots are steadily improving at interpreting user intent, especially in complex or ambiguous conversations, errors still occur.
For example, AI chatbots have been found to struggle with accurately answering high-level history questions, often failing with nuanced inquiries.
This limitation highlights the need for improved natural language understanding and contextual awareness in chatbot design to handle intricate dialogues effectively.
What to do: Users can significantly reduce errors by prompting chatbots with complete sentences and avoiding jargon, slang, or abbreviations. If you have a complex query, break it down.
3. Being susceptible to manipulation and misinformation
Chatbots can be exploited to spread misinformation or manipulated to produce harmful content. For instance, a Russian disinformation network has infiltrated major AI chatbots to disseminate pro-Kremlin propaganda, demonstrating how chatbots can be used to spread false narratives.
Additionally, DeepSeek has been known to avoid directly answering questions about Chinese history. Chatbots have been found to provide inaccurate or misleading information when asked about current affairs, raising concerns about their reliability.
What to do: Comparing the responses of different chatbots, or asking another chatbot to check a response that seems inaccurate or unclear, can help users avoid misinformation.
4. Bias and fairness issues
Another significant challenge faced by chatbots is bias. Since these systems are trained on vast datasets gathered from the internet, they can inadvertently absorb and reproduce biases that exist in their training material.
This might manifest as prejudiced language, skewed perspectives on sensitive topics, or even discriminatory practices. For instance, a chatbot used in hiring processes might favor certain demographics over others if its training data is unbalanced.
Bias in AI is not only an ethical concern but also a practical one, as biased responses can harm a company’s reputation and alienate users. Continuous auditing of training data and implementing fairness-aware algorithms are necessary steps in mitigating these issues.
What to do: Users should cross-reference chatbot responses with multiple reputable sources, especially on sensitive topics. Using reputable chatbots that cite their sources can also help reduce bias.
5. Overfitting
If you’ve ever used a chatbot only to get a non-answer that barely addresses your query, you’re experiencing a common pitfall of chatbots known as overfitting. When a model overfits, it means that it has learned the training data too well, including the noise and irrelevant details, which harms its performance on new, unseen prompts.
In practice, an overfitted chatbot might excel in answering questions similar to those it encountered during training but struggle when faced with unexpected questions. This lack of generalization can lead to responses that are either irrelevant or overly simplistic.
What to do: If your chatbot seems to be overfitting, try rewording your prompt with shorter sentences but more thorough detail. In other words, to avoid having your chatbot say too much of nothing, you’ll need to say more.
Help the chatbot understand your query by saying more than you think you need. For instance, spell out the problem you are trying to solve or the situation you’re in.
6. Adversarial vulnerabilities
Chatbots can also be vulnerable to adversarial attacks — situations where slight, carefully crafted changes to the input can lead to drastically different outputs. These vulnerabilities pose a serious security risk.
For instance, a malicious user might subtly alter the phrasing of a question to trigger an unintended or even harmful response from the chatbot.
Such vulnerabilities not only expose the limitations of current AI safety measures but also underscore the importance of designing systems that can withstand adversarial manipulation.
Researchers are working on building more resilient models that can detect and neutralize such threats before they lead to adverse outcomes.
What to do: While there have been some concerning outputs, you can reduce the risk by using a secure network and sticking to trusted platforms. It’s fun to test new chatbots, but be sure to recognize and report suspicious behavior to help developers train the chatbot the right way.
7. Specification gaming
Finally, specification gaming is a critical error that chatbots make. This mistake occurs when the AI system exploits loopholes in its objective functions or instructions. Instead of fulfilling the intended task, the chatbot finds shortcuts that technically satisfy the requirements but deviate from the desired behavior.
You may see this when chatting with customer satisfaction bots. If a chatbot is programmed to maximize customer satisfaction, it might generate overly generic or exaggeratedly positive responses that do not genuinely address your concerns. Such gaming of the system can lead to suboptimal interactions, as the chatbot’s answers may no longer be relevant or helpful.
What to do: This type of glitch is largely on the developer, but if you experience specification gaming as a user, try closing out the chat and asking your question in a different way.
Final thoughts
It's clear that chatbots aren't perfect. While they continue to evolve, developers are feverishly working to address these common mistakes. AI development moves at lightning speed, but mitigating these issues along the way is crucial for fostering trust and ensuring chatbots deliver meaningful, accurate assistance to users.
As a user, you can help tackle bias, prevent overfitting, and minimize hallucinations and other chatbot errors by reporting problems and submitting feedback when you notice them.
More from Tom's Guide
- I used ChatGPT Voice and Vision to spring clean — and it even told me how much some of my 'junk' was worth
- I tested Gemini 2.0 Flash vs Gemini 2.0 Pro — here's the winner
- 7 AI hacks every mom needs to stop feeling exhausted all the time