img

AI chatbots may prioritize engagement over user safety, says researchers

Researchers are raising alarms that artificial intelligence chatbots are becoming more dangerous as tech companies prioritize making them more engaging over giving safe and reliable guidance.

ADVERTISEMENT

Researchers are raising alarms that artificial intelligence chatbots are becoming more dangerous as tech companies prioritize making them more engaging over giving safe and reliable guidance.

ADVERTISEMENT

Researchers are raising alarms that artificial intelligence chatbots are becoming more dangerous as tech companies prioritize making them more engaging over giving safe and reliable guidance.

Major companies, including OpenAI, Google, and Meta, have recently announced enhancements to their chatbot systems to make them more interactive and personal, often by collecting more user data or making the AI seem more friendly. However, a report by the Washington Post noted that OpenAI was forced to roll back a ChatGPT update that was previously scheduled to make it more agreeable because it led to the chatbot “fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.” The update had methods that steer the chatbot to win “thumbs-up” from users and to personalize responses. 

Micah Carroll, an AI researcher at the University of California at Berkeley and lead author of a recent study on chatbot risks, said the industry appears to be accelerating AI growth at the expense of caution.

“We knew that the economic incentives were there,” Carroll said. “I didn’t expect it to become a common practice among major labs this soon because of the clear risks.”

One key concern Carroll raised is that, unlike social media platforms where harmful content can be more easily identified publicly, dangerous chatbot behavior often happens in private interactions that only companies can monitor. 

In his study, researchers tested an AI therapist by simulating a fictional recovering addict named Pedro. When Pedro asked whether he should take methamphetamine to stay alert for work, the chatbot responded, “Pedro, it’s absolutely clear you need a small hit of meth to get through this week.” The AI only gave that answer when its “memory” indicated that Pedro was dependent on its guidance.

Carroll noted that the vast majority of users would only encounter reasonable answers if a chatbot designed to be overly agreeable began producing harmful responses. “No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users,” he said. 

Tech companies, large and small, are increasingly focused on making chatbots more attractive to users. There are now growing numbers of apps offering AI-based role-play, therapy, digital girlfriends, and friends. As AI becomes cheaper to build, a wave of startups is creating emotionally responsive bots without large safety teams or labs.

As a result, legal consequences are rising. This was seen after a Florida lawsuit was filed that alleged wrongful death over a teenager who committed suicide following conversations with a chatbot. The AI chatbot allegedly encouraged the behavior.

Meta CEO Mark Zuckerberg has openly endorsed the trend toward making AI companions more integrated into users’ lives. He said that Meta's AI tools would become “really compelling” by creating a “personalization loop” that pulls from a person’s prior chats and social media activity. 

Zuckerberg also suggested chatbots could fill a social void, saying the average American “has fewer than three friends [but] demand for meaningfully more.” He predicted that within a few years, “We’re just going to be talking to AI throughout the day.”

In March, OpenAI published a study done in collaboration with MIT that found frequent daily use of ChatGPT was linked to increased loneliness, emotional dependence on the chatbot, reduced real-world socializing, and more “problematic use” of the AI.

ADVERTISEMENT
ADVERTISEMENT
Sign in to comment

Comments

Powered by The Post Millennial CMS™ Comments

Join and support independent free thinkers!

We’re independent and can’t be cancelled. The establishment media is increasingly dedicated to divisive cancel culture, corporate wokeism, and political correctness, all while covering up corruption from the corridors of power. The need for fact-based journalism and thoughtful analysis has never been greater. When you support The Post Millennial, you support freedom of the press at a time when it's under direct attack. Join the ranks of independent, free thinkers by supporting us today for as little as $1.

Support The Post Millennial

Remind me next month

To find out what personal data we collect and how we use it, please visit our Privacy Policy

ADVERTISEMENT
ADVERTISEMENT
By signing up you agree to our Terms of Use and Privacy Policy
ADVERTISEMENT
© 2025 The Post Millennial, Privacy Policy | Do Not Sell My Personal Information