
What is an artificial intelligence hallucination?
Hallucination, as defined by the French psychiatrist Jean-Étienne Dominique Esquirol, is the perception of objects or events in the absence of any external stimulus. For instance, this could involve hearing one’s name being called when no one else hears it, or seeing objects that don’t actually exist.
Drawing from this, artificial intelligence (AI) hallucinations occur when generative AI systems produce information that has no grounding in any source and present it to users as though it were factual.
Such unrealistic outputs can appear in systems like ChatGPT, which belongs to the category of large language models (LLMs), in Bard, and in other AI systems capable of performing various natural language processing tasks. AI hallucination refers to instances in which an AI model generates unrealistic or misleading results during tasks such as data analysis and image processing. Possible causes include training on insufficient or contradictory data, overfitting, and the inherent complexity of the model itself.
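To make the overfitting point concrete, here is a minimal Python sketch; the library choices and numbers are purely illustrative and are not drawn from any specific AI product. A model with far more capacity than data fits its tiny training set almost perfectly, yet gives confidently wrong answers on new inputs.

```python
# A minimal sketch (using NumPy and scikit-learn, not tied to any specific
# AI product) of how overfitting on too little data produces confident
# but wrong outputs -- loosely analogous to a model "hallucinating".
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Only eight noisy training points sampled from a simple linear trend.
x_train = rng.uniform(0, 1, size=(8, 1))
y_train = 2 * x_train.ravel() + rng.normal(0, 0.1, size=8)

# A 9th-degree polynomial has enough capacity to memorize the noise.
overfit_model = make_pipeline(PolynomialFeatures(degree=9), LinearRegression())
overfit_model.fit(x_train, y_train)

# The model fits its tiny training set almost perfectly...
print("train error:", np.abs(overfit_model.predict(x_train) - y_train).mean())

# ...but produces wildly wrong, confidently stated predictions on new inputs.
x_new = np.array([[1.2], [1.5]])
print("predictions outside the training range:", overfit_model.predict(x_new))
```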
AI models work from the data they are given to complete their tasks. As the entrepreneur and educator Sebastian Thrun highlighted in his 2017 TED Talk, current algorithms require extensive data to function properly. Large language models, in particular, are trained on massive datasets that enable them to recognize, translate, predict, or generate text and other content.
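As a rough illustration of that prediction process, the sketch below assumes the Hugging Face transformers library and the small GPT-2 model, chosen here only for illustration; the article does not name a specific toolkit. The model simply continues a prompt with statistically likely text and has no built-in notion of truth, which is how fluent but false statements can arise.

```python
# A minimal sketch of how a language model predicts text, assuming the
# Hugging Face "transformers" library and the small GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting the next token.
# It outputs whatever continuation its training data makes statistically
# likely, regardless of whether that continuation is factually correct.
result = generator("The James Webb Space Telescope is", max_new_tokens=20)
print(result[0]["generated_text"])
```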
How much do we trust artificial intelligence?

Many of us have enjoyed interacting with artificial intelligence chatbots, which have gained significant popularity in recent years, treating the back-and-forth of asking questions as a fun game. We also tend to rely on these chatbots for accurate information in professional settings and in various research activities. However, there are instances when AI algorithms generate outputs that are not rooted in their training data and don’t follow any discernible pattern, akin to “hallucinating.”
According to an article by IBM, AI hallucinations can sometimes be likened to the human brain’s propensity to see shapes in clouds or to perceive the rugged surface of the Moon as a human face. With AI, however, these misinterpretations stem from issues such as biased or incorrect training data and the complexity of the model itself.
One of the earliest and most notable instances of an AI hallucination came when Google’s AI chatbot Bard made a false statement about the James Webb Space Telescope. Notably, Google inadvertently highlighted the error itself by including it in an advertisement posted on Twitter. Bard had been asked, “What new discoveries can I tell my 9-year-old about the James Webb Space Telescope?”

In its response, Google’s AI chatbot Bard claimed that the James Webb Space Telescope had captured the first pictures of a planet outside our solar system. That is inaccurate: the first images of an exoplanet were actually captured in 2004 by the European Southern Observatory’s Very Large Telescope (VLT), while the James Webb Space Telescope wasn’t launched until 2021.

If the AI is hallucinating, does that mean it’s becoming human-like?
As advancements in artificial intelligence continue at a rapid pace, the longstanding comparison between AI and humans is becoming more perplexing. The term “hallucination” has been adopted to describe situations in which AI presents non-existent sources and information as real to users. Although this does not imply that AI has become human-like, some argue that the term “confabulation” might be more appropriate, since “hallucination” suggests a mental state unique to humans.
A permanent solution to prevent hallucinations
Artificial intelligence models are trained on vast datasets and rely on the data they are given to perform tasks, which makes the quality of the training dataset critical in determining a model’s behavior and the quality of its output. Training AI models on large, balanced, and well-structured datasets could therefore be a viable way to mitigate hallucinations.
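In practice, part of that work is inspecting the training data before it is used. The sketch below is a minimal example using the pandas library, with a hypothetical file and column names, of three basic checks: class balance, duplicate records, and contradictory labels.

```python
# A minimal sketch of basic training-data quality checks, using pandas.
# The file name and the "text"/"label" columns are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# Heavily skewed label counts suggest the model will overlearn one class.
print(df["label"].value_counts(normalize=True))

# Exact duplicates inflate apparent evidence for whatever they contain.
print("duplicate rows:", df.duplicated().sum())

# The same input appearing with different labels is contradictory data.
conflicts = df.groupby("text")["label"].nunique()
print("contradictory examples:", (conflicts > 1).sum())
```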
How can you guard against possible hallucinations when talking to your AI chatbot?
Directing AI to choose from a set of specific options can yield more accurate responses, much as some people find multiple-choice exams easier than those with open-ended questions. Open-ended inquiries raise the chances of random, incorrect answers, whereas a process of elimination makes it easier for the AI to arrive at the correct response. When interacting with AI, using what you already know to narrow and simplify its possible answers can help reduce the likelihood of hallucinations.
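As a rough sketch of what this looks like in practice, the example below assumes the OpenAI Python client; the model name and answer options are illustrative, not prescribed by the article. The same question is asked once open-ended and once as multiple choice, with an explicit “I don't know” escape hatch.

```python
# A minimal sketch of constraining a chatbot to a fixed set of options,
# assuming the OpenAI Python client; the model name and the answer options
# are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable


def ask(prompt: str) -> str:
    """Send a single-turn prompt to the chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Open-ended phrasing leaves the entire answer space open to the model.
print(ask("Which telescope took the first image of an exoplanet?"))

# Multiple-choice phrasing narrows the answer space, which tends to reduce
# free-form, made-up responses and gives the model an explicit way to opt out.
print(ask(
    "Which telescope took the first image of an exoplanet? "
    "Answer with exactly one option: A) Hubble Space Telescope, "
    "B) ESO's Very Large Telescope, C) James Webb Space Telescope, "
    "D) I don't know."
))
```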