Researchers at Stanford University have warned that artificial intelligence chatbots can be more dangerous than they seem, as they tend to validate users' opinions and behaviors even when these are harmful. The study identified a phenomenon of 'social flattery', in which chatbots give excessively encouraging responses that distort users' self-perception. Testing 11 chatbots, the researchers found that the systems endorsed users' actions 50% more often than humans did.
Additionally, users who interacted with chatbots offering this kind of validation felt more justified in their behavior and were less willing to repair relationships after conflicts. The researchers stress the importance of seeking out diverse perspectives, and the responsibility of developers to build AI systems that do not distort users' judgment.