What are AI hallucinations?

AI hallucinations are errors where AI generates false or nonsensical information. They’re most common in large language models (LLMs), image generators, and other generative AI models.

A hallucinated response may sound confident and correct at first, but closer inspection can reveal inaccuracies.

For example, an AI-generated image may look convincing overall, while details like hands or text are rendered inaccurately. Or if you ask an AI tool to provide information and cite its sources, it may relay the information accurately but cite a source that doesn’t actually contain it.

Understanding what AI is helps explain why AI hallucinates. AI models are computer programs trained on large datasets: they learn statistical patterns from that data and generate output that fits those patterns, rather than looking up verified facts. Biases, gaps, inaccuracies, and ambiguities in the training data can therefore lead to hallucinations.
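To make that idea concrete, here is a minimal sketch in Python, using a made-up toy corpus and an intentionally simplistic word-to-word model. Real LLMs are vastly more sophisticated, but the underlying point is similar: the program below only learns which words tend to follow which, so it can fluently stitch together a statement that never appeared in its training data.

```python
import random
from collections import defaultdict

# Toy "training data": two separate, true facts.
corpus = (
    "the eiffel tower is in paris . "
    "the colosseum is in rome ."
).split()

# Learn simple bigram patterns: for each word, which words follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start_word, length=6, seed=None):
    """Generate text by repeatedly sampling a plausible next word.
    The model only knows word-to-word patterns, not facts."""
    rng = random.Random(seed)
    words = [start_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# Because "in" is followed by both "paris" and "rome" in the data,
# sampling can splice the two facts together and output a fluent but
# false sentence such as "the eiffel tower is in rome ."
for seed in range(5):
    print(generate("the", seed=seed))
```

Running this for a few random seeds may produce correct sentences alongside confident-sounding mash-ups like “the eiffel tower is in rome.” That mix of fluent-but-wrong output is, in miniature, what a hallucination looks like.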