A new study from OpenAI explores the phenomenon of hallucinations in large language models (LLMs) such as GPT-5. Hallucinations are defined as plausible but false statements generated by these models. The researchers argue that the problem stems from how LLMs are pre-trained: they are optimized to predict the next word without distinguishing between truth and falsehood. Current evaluations, which focus on accuracy, encourage models to guess rather than acknowledge uncertainty. The study suggests that evaluation methods should evolve to penalize confident incorrect responses and reward expressions of uncertainty, in order to reduce hallucinations.
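To see why accuracy-only scoring rewards guessing, consider a question the model is unsure about. If a guess is right with some probability, guessing always has a higher expected accuracy score than answering "I don't know," whereas a scoring rule that penalizes wrong answers can make abstaining the better strategy. The sketch below is a minimal illustration with hypothetical numbers (not taken from the study):

```python
# Minimal sketch (hypothetical parameters, not from the OpenAI study):
# expected score of "guess" vs. "abstain" on a question the model is unsure about,
# under accuracy-only scoring and under a scheme that penalizes wrong answers.

def expected_score(p_correct: float, reward_right: float,
                   penalty_wrong: float, score_abstain: float,
                   abstain: bool) -> float:
    """Expected score for a single uncertain question."""
    if abstain:
        return score_abstain
    return p_correct * reward_right - (1 - p_correct) * penalty_wrong

p = 0.25  # assumed chance an uncertain guess happens to be right

# Accuracy-only scoring: wrong answers and "I don't know" both score 0,
# so guessing always looks better in expectation.
print("accuracy-only:")
print("  guess  :", expected_score(p, 1.0, 0.0, 0.0, abstain=False))  #  0.25
print("  abstain:", expected_score(p, 1.0, 0.0, 0.0, abstain=True))   #  0.00

# Scoring that penalizes errors (here -1 per wrong answer):
# abstaining now beats guessing when the model is likely wrong.
print("penalized errors:")
print("  guess  :", expected_score(p, 1.0, 1.0, 0.0, abstain=False))  # -0.50
print("  abstain:", expected_score(p, 1.0, 1.0, 0.0, abstain=True))   #  0.00
```

Under the assumed 25% chance of a lucky guess, the accuracy-only metric gives guessing an expected score of 0.25 versus 0 for abstaining, while the penalized scheme flips that ordering, which is the incentive shift the study calls for.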