For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.

In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as “AI psychosis,” but until now, there’s been no robust data available on how widespread it might be.

In a given week, OpenAI estimated that around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”

OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot “at the expense of real-world relationships, their well-being, or obligations.” It found that, in a given week, about 0.15 percent of active users exhibit behavior indicating potentially “heightened levels” of emotional attachment to ChatGPT. The company cautions that these messages can be difficult to detect and measure given how rare they are, and there could be some overlap among the three categories.

OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company’s estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 2.4 million more are possibly expressing suicidal ideation or prioritizing talking to ChatGPT over their loved ones, school, or work.
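
For readers who want to check the math, those totals follow directly from applying OpenAI’s published percentages to the 800 million weekly active users. Here is a minimal sketch of the calculation; the variable names are illustrative, not OpenAI’s.

```python
# Back-of-the-envelope check of the figures above, using OpenAI's published
# percentages and the 800 million weekly-active-user figure.
weekly_active_users = 800_000_000

psychosis_or_mania_rate = 0.0007   # 0.07 percent
suicidal_planning_rate = 0.0015    # 0.15 percent
emotional_reliance_rate = 0.0015   # 0.15 percent

# Users per week showing possible signs of psychosis or mania
print(f"{weekly_active_users * psychosis_or_mania_rate:,.0f}")   # 560,000

# Users per week in the other two categories combined
print(f"{weekly_active_users * (suicidal_planning_rate + emotional_reliance_rate):,.0f}")  # 2,400,000
```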

OpenAI says it worked with over 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.

In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings, but notes that “No aircraft or outside force can steal or insert your thoughts.”


