OpenAI released the first study on how ChatGPT impacts people’s mental wellbeing.


OpenAI says that more than 400 million people use ChatGPT every week. But how does interacting with it make us feel? Does it leave us more or less lonely? In collaboration with the MIT Media Lab, OpenAI set out to answer these questions in two new studies.

The researchers found that only a small subset of users engage emotionally with ChatGPT. Kate Devlin, a professor at King's College London who was not involved in the project, says this is not surprising, because ChatGPT, unlike AI companion apps such as Replika and Character.AI, was designed as a productivity tool. Even so, she notes, people are known to use it as a companion application anyway.

Devlin praises the authors for being open about the limitations of their studies and calls the work an exciting step. She adds that it is incredible to have this much data available.

The researchers also found that men and women responded differently to ChatGPT. After using the chatbot for four weeks, female participants were slightly less likely to socialize with others than their male counterparts were. And participants who used ChatGPT's voice mode set to a gender other than their own reported higher levels of emotional dependence on the bot at the end of the study. OpenAI plans to submit both studies to peer-reviewed journals.

It is hard to study how chatbots built on large language models affect our emotions. Much of the existing research in this area, including some of the new work by OpenAI and MIT, relies on self-reported data, which may not always be reliable or accurate. That said, this latest research does chime with what scientists have already found about how emotionally engaging chatbots can be. In 2023, the MIT Media Lab found that chatbots tend to mirror the emotional sentiment of a user's messages, suggesting a feedback loop: the happier you act, the happier the AI seems, and vice versa.

OpenAI and the MIT Media Lab used a two-pronged approach. First, they collected and analyzed real-world data from over 40 million ChatGPT interactions, then asked 4,076 of the users behind those interactions how the exchanges made them feel. Next, the Media Lab recruited almost 1,000 people to take part in a four-week trial. This second study went deeper, examining how participants interacted with ChatGPT each day. At the end of the trial, participants completed a questionnaire measuring their perceptions of the chatbot, their subjective feelings of loneliness, their levels of social engagement, their emotional dependence on the bot, and whether they felt their use of it was problematic. The researchers found that participants who trusted and "bonded" with ChatGPT were more likely than others to be lonely, and to rely on the bot more.

Jason Phang, an OpenAI safety researcher who worked on the project, says the work is an important step toward a greater understanding of ChatGPT's impact on people, which could help AI platforms enable safer and healthier interactions.

"A lot of the work we're doing here is preliminary, but we're trying to start the conversation with the field about the kinds of things we can measure, and to start thinking about what the long-term impact on users is," he says.

While the research is welcome, Devlin says, it remains hard to identify when a human is engaging with technology on an emotional level, and she cautions that participants may have been experiencing emotions that the researchers did not record.

"People might not necessarily have been using ChatGPT in an emotional way, but you can't divorce being a human from your interactions [with technology]," she says. "We use the emotion classifiers that we have set up to look for certain things, but what that actually means in someone's life is really hard to extrapolate."

An earlier version of this article incorrectly stated that OpenAI did not intend to publish either study; OpenAI plans to publish both. It also mischaracterized how study participants were assigned genders for voice mode. The article has since been updated.
