The Dark Side of AI: Understanding Hallucinations in Generative Models

The honeymoon period for Generative AI is in full swing, but the technology is not without its flaws. AI hallucinations can lead to convincing factual mistakes, and it’s a problem that needs to be addressed. Researchers and tech giants are working on solutions, but is it possible to stop AI hallucinations for good?

AI Hallucination in LLMs and Beyond: Will It Ever Be Fixed?

The honeymoon period for Generative AI (GenAI) is well and truly on. With broad consensus that GenAI is the best thing since sliced bread, capable of catering to our whims and fancies beyond our wildest imagination, it seems these AI chatbots and text-to-image generators can do no wrong. Unless, of course, they do – at which point the honeymoon ends rather abruptly.

AI hallucination in action

Just like us mere mortals, GenAI isn’t without its flaws. Sometimes subtle, sometimes glaringly obvious. In its myriad attempts to conjure up text and images out of thin air, AI has a tendency to make factual mistakes. In other words, to hallucinate. These are instances where GenAI models produce incorrect, illogical, or purely nonsensical output amounting to beautifully wrapped gibberish.

“AI hallucinations in popular LLMs like Llama 2, GPT-3.5, Claude, and others are all ultimately linked to their training data.”

Why Does AI Hallucinate?

AI hallucinations in popular LLMs are ultimately linked to their training data. However gigantic that training data is, if it carries some kind of built-in bias, the generative output of these LLMs can contain hallucinated facts that reinforce and propagate that bias in one form or another – similar to the Google Gemini blunders, for example. On the other end of the spectrum, a lack of sufficiently varied data on a given subject can also lead to AI hallucinations whenever the LLM is prompted on a topic it isn’t well-versed enough in to answer with authority.

Training data bias leading to AI hallucination

Can AI Hallucination be Detected and Stopped?

University of Oxford researchers appear to have made significant progress in ensuring the reliability of information generated by AI, addressing the issue of AI hallucination head-on. Their study, published in Nature, introduces a novel method for detecting instances when LLMs hallucinate by inventing plausible-sounding but imaginary facts.
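At its core, the published method measures uncertainty over meanings rather than wordings: the same question is asked several times, semantically equivalent answers are grouped together, and an entropy score over those groups indicates whether the model actually knows the answer or is merely improvising. Below is a minimal, simplified sketch of that idea – not the authors’ implementation – where `generate_answer` and `means_the_same` are hypothetical placeholders for a sampled LLM call and a bidirectional entailment check.

```python
# Simplified sketch of semantic-entropy-style hallucination detection:
# sample several answers, group answers that share the same meaning,
# then measure how spread out the model is across those meanings.
# `generate_answer` and `means_the_same` are hypothetical placeholders.

import math
from typing import Callable, List


def semantic_entropy(
    question: str,
    generate_answer: Callable[[str], str],        # one sampled LLM answer
    means_the_same: Callable[[str, str], bool],   # entailment/NLI-style check
    n_samples: int = 10,
) -> float:
    """Estimate uncertainty over *meanings* rather than exact wordings."""
    answers: List[str] = [generate_answer(question) for _ in range(n_samples)]

    # Greedily cluster answers that express the same claim.
    clusters: List[List[str]] = []
    for answer in answers:
        for cluster in clusters:
            if means_the_same(answer, cluster[0]):
                cluster.append(answer)
                break
        else:
            clusters.append([answer])

    # Shannon entropy over the empirical distribution of semantic clusters:
    # high entropy means the model keeps changing its story.
    probabilities = [len(cluster) / n_samples for cluster in clusters]
    return -sum(p * math.log(p) for p in probabilities)
```

In practice, a threshold on this score, tuned on a validation set, would decide whether a given answer gets flagged as a likely hallucination.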

Oxford researchers tackling AI hallucination

Microsoft also claims to tackle AI hallucinations through new tools added to its Azure AI Studio suite for enterprise customers, according to a report by The Verge. The tools are designed to flag hallucinated, ungrounded output in its enterprise customers’ GenAI-based app deployments and to block malicious prompts that trick those customers’ AI into deviating from its training data.
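Checks of this kind generally boil down to verifying that each claim in a generated answer is supported by the source material the app was meant to ground it in. The snippet below is a generic, hypothetical illustration of that pattern, not Azure AI Studio’s actual API; `is_supported_by` stands in for an entailment model or a similar verifier.

```python
# Generic illustration of a groundedness check (not Microsoft's actual API):
# every claim in a generated answer should be supported by the source text
# the app was supposed to answer from. `is_supported_by` is a hypothetical
# placeholder for an entailment model or similar verifier.

from typing import Callable, List


def ungrounded_claims(
    answer_sentences: List[str],
    source_text: str,
    is_supported_by: Callable[[str, str], bool],
) -> List[str]:
    """Return the sentences in the answer that the source text does not support."""
    return [sentence for sentence in answer_sentences
            if not is_supported_by(sentence, source_text)]


# If this list is non-empty, the app can regenerate the answer, demand
# citations, or warn the user instead of serving a potential hallucination.
```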

AI Hallucination: A Glass Half Full or Empty?

Long story short, it appears there’s no single solution for stopping AI hallucinations. With GenAI deployments across various industries still accelerating, the problem of AI hallucination remains an ongoing area of research for all major tech players and academia. In fact, one research paper from the National University of Singapore asserts that AI hallucination is inevitable due to an innate limitation of LLMs.

Research on AI hallucination continues

Depending on how you look at it, the phenomenon of AI hallucination seems to be both a curse and a blessing in disguise (but it’s mostly a curse). It mirrors the complexities of the human brain and cognitive thought, a process shrouded in mystery that neither medical researchers nor computer scientists fully understand. Just as our brains can sometimes misinterpret or fill in gaps in information, creating illusions or mistaken perceptions, AI systems too run into limitations when interpreting data. While efforts are underway to enhance their accuracy and reliability, these occasional AI hallucinations also present opportunities for creativity and innovation, for thinking outside the box – similar to how our minds can unexpectedly spark new ideas.