9 September 06:31

OpenAI is investigating why advanced language models, such as GPT-5, produce hallucinations.

Gabriel Dumitrache
IT&C knowledge
Photo: Shutterstock
A new study from OpenAI explores the phenomenon of hallucinations in large language models (LLMs), such as GPT-5. Hallucinations are defined as plausible but false statements generated by these models. Researchers emphasize that the problem arises from the way LLMs are pre-trained, optimized to predict the next word without distinguishing between truth and falsehood. Current evaluations, which focus on accuracy, encourage models to guess instead of recognizing uncertainty. The study suggests that evaluation methods should evolve to penalize incorrect responses and reward the expression of doubt, in order to reduce hallucinations.
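The incentive argument can be made concrete with a toy scoring comparison. The sketch below is illustrative only and not taken from the OpenAI study; the scoring functions, penalty values, and example answer lists are assumptions chosen to show how an accuracy-only metric rewards guessing, while a scheme that penalizes wrong answers and credits abstention rewards expressing doubt.

def accuracy_score(answers):
    # Accuracy-only scoring: one point per correct answer, nothing else counts.
    return sum(1 for a in answers if a == "correct")

def calibrated_score(answers, wrong_penalty=1.0, abstain_credit=0.25):
    # Hypothetical scheme: penalize confident wrong answers and give
    # partial credit for admitting uncertainty.
    score = 0.0
    for a in answers:
        if a == "correct":
            score += 1.0
        elif a == "abstain":        # the model says "I don't know"
            score += abstain_credit
        else:                       # confident but wrong: a hallucination
            score -= wrong_penalty
    return score

# Ten questions; both models truly know the answer to six of them.
always_guess = ["correct"] * 6 + ["correct"] + ["wrong"] * 3   # one lucky guess
admits_doubt = ["correct"] * 6 + ["abstain"] * 4               # abstains when unsure

print(accuracy_score(always_guess), accuracy_score(admits_doubt))      # 7 6
print(calibrated_score(always_guess), calibrated_score(admits_doubt))  # 4.0 7.0

Under accuracy alone, guessing can only help, so the guessing model scores higher. Once wrong answers cost points and abstention earns partial credit, the model that expresses doubt comes out ahead, which is the direction the study argues evaluations should move in.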

Sources

Control F5
Are Incentives Driving AI Hallucinations?