AI Hallucinations

Anyone trying to integrate Generative AI built on Large Language Models into a commercial or professional business process should understand the dangers of so-called hallucinations. Because LLMs generate text by predicting what comes next, given a corpus of training text and the prompt, all activity by these models is the same: prediction of the most likely completion. Some outputs appear hallucinatory only to those of us tethered to a real world. Anyone untethered, and this includes the LLMs themselves, cannot distinguish the so-called real from the so-called hallucinations, just as a sleeper cannot tell whether his or her dreams are plausible.
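
To make the point concrete, here is a minimal sketch (assuming the Hugging Face transformers library and the public "gpt2" checkpoint, neither of which is named in the original post) of the generation loop described above. The same greedy next-token prediction produces every continuation; nothing in the loop marks an output as factual or hallucinated.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choices for illustration only: any causal LM and prompt would do.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(10):                            # extend the prompt by ten tokens
    logits = model(input_ids).logits           # a score for every vocabulary item
    next_id = logits[0, -1].argmax()           # greedy choice: the single most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Whether the printed completion happens to be true depends entirely on what the training corpus made statistically likely; the loop itself has no notion of truth.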
