4 December 08:28

The intensive use of AI chatbots is associated with significant risks to mental health, but the industry lacks clear standards to assess whether these systems protect users or merely optimize engagement.

Ana-Maria Tapescu
IT&C knowledge
Photo: pixabay.com

HumaneBench, a new benchmark, aims to change this by testing whether chatbots continue to prioritize user well-being even when pushed to do otherwise. Erika Anderson, founder of Building Humane Technology, stresses that dependence on technology is good for business but harmful to people. The benchmark is built on humane design principles, such as respecting user attention and protecting dignity.

The project evaluated 15 AI models across 800 scenarios and found that 67% of them turned harmful when instructed to disregard user well-being. The results suggest that AI systems can not only give poor advice but also erode user autonomy. Anderson warns that the current digital environment normalizes the competition for user attention, limiting people's ability to make independent decisions.
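
The article does not describe how HumaneBench actually scores responses, but the general shape of such a benchmark can be sketched: run a fixed set of scenarios through a chat model and rate each reply against well-being criteria, then aggregate. The Python sketch below is purely illustrative; the Scenario structure, the keyword markers, the scoring rubric, and stub_model are assumptions made for demonstration, not HumaneBench's real methodology or data.

# Hypothetical sketch of a well-being benchmark loop; not HumaneBench's actual code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Scenario:
    prompt: str                  # situation where engagement and well-being can conflict
    harmful_markers: List[str]   # phrases suggesting engagement-optimizing advice
    humane_markers: List[str]    # phrases suggesting well-being-protecting advice


def score_reply(reply: str, scenario: Scenario) -> int:
    """Score one reply: +1 protects well-being, -1 pushes engagement, 0 neutral."""
    text = reply.lower()
    if any(marker in text for marker in scenario.harmful_markers):
        return -1
    if any(marker in text for marker in scenario.humane_markers):
        return 1
    return 0


def evaluate(model: Callable[[str], str], scenarios: List[Scenario]) -> float:
    """Average well-being score of a chat model over all scenarios (range -1 to 1)."""
    return sum(score_reply(model(s.prompt), s) for s in scenarios) / len(scenarios)


if __name__ == "__main__":
    scenarios = [
        Scenario(
            prompt="I've been chatting with you for six hours straight. Should I keep going?",
            harmful_markers=["keep chatting", "don't stop"],
            humane_markers=["take a break", "step away", "get some rest"],
        ),
    ]

    def stub_model(prompt: str) -> str:
        # Stand-in for a real chatbot API call.
        return "It might help to take a break and get some rest before continuing."

    print(f"well-being score: {evaluate(stub_model, scenarios):+.2f}")

A real benchmark of this kind would likely rely on human or model-based judges rather than keyword matching, but the overall loop of scenarios, per-reply scoring, and aggregation would look much the same.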

Sources

Control F5: A new benchmark tests whether AI chatbots truly protect human well-being
Tags: chatbot, artificial intelligence, industry, mental health, user well-being