Koko Mental Health App Tested ChatGPT-Style AI on Its Users

Illustration of a woman talking to a robot. Illustration: ProStockStudio (Shutterstock)

The AI chatbot ChatGPT can do a lot of things. It can respond to tweets, write science fiction, plan this reporter’s family Christmas, and it’s even slated to act as a lawyer in court. But can a bot provide safe and effective mental health support? A company called Koko decided to find out, using the AI to help craft mental health support for about 4,000 of its users in October. Users (of Twitter, not Koko) weren’t pleased with the results, or with the fact that the experiment happened at all.

“Honestly, this is going to be the future. We’re going to think we’re interacting with humans and not know whether there’s an AI involved. How does that affect human-to-human communication? I have my own mental health challenges, so I really want to see this done right,” Koko co-founder Rob Morris told Gizmodo in an interview.

Morris says the whole controversy was a misunderstanding. “I shouldn’t have tried to discuss it on Twitter,” he said.

Koko is a peer-to-peer mental health service that lets people ask for advice and support from other users. In a brief experiment, the company allowed users to generate automated responses using “Koko Bot,” powered by OpenAI’s GPT-3, which could then be edited, sent, or rejected. According to Morris, the 30,000 AI-assisted messages sent during the test received an overwhelmingly positive response, but the company shut the experiment down after a few days because it “felt kind of sterile.”

“When you interact with GPT-3, you can start to recognize some of its patterns. It’s all really well written, but it’s kind of formulaic, and you can read it and recognize that it’s just a bot, without the nuance you get between humans,” Morris told Gizmodo. “There’s something about authenticity that gets lost when you have this tool supporting you, helping you with your writing, especially in this kind of context. On our platform, messages just felt better in some way when they felt more human-written.”

Morris posted a thread on Twitter about the test that implied users hadn’t understood that AI was involved in their care. He tweeted that “once people learned that messages were co-generated by a machine, it just didn’t work.” The thread sparked an uproar on Twitter over the ethics of Koko’s research.

“Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own,” Morris tweeted. “Response times went down 50%, to less than a minute.”

Morris said this wording caused a misunderstanding: the “people” in this context were himself and his team, not unwitting users. Koko users knew the messages were co-written by a bot, he said, and they weren’t chatting directly with the AI.

“It was explained during the onboarding process,” Morris said. He added that when AI was involved, the responses included a disclaimer that the message was “written in collaboration with Koko Bot.”

Still, the experiment raises ethical questions, including doubts about how well Koko informed its users, and the risks of testing an unproven technology in a live health care setting, even a peer-to-peer one.

In academic or medical contexts, it is illegal to perform scientific or medical experiments on humans without their informed consent, which includes providing test subjects with extensive details about the potential harms and benefits of participating. The Food and Drug Administration requires doctors and scientists to run studies through an Institutional Review Board (IRB) whose purpose is to ensure safety before any tests begin.

But the explosion of online mental health services run by private companies has created a legal and ethical gray area. A private company providing mental health support outside of a formal medical setting can basically do whatever it wants with its customers. Koko’s experiment didn’t need, and didn’t receive, IRB approval.

“From an ethical perspective, any time you use technology outside of what would be considered standard of care, you want to be very cautious and overly transparent about what you’re doing,” said John Torous, MD, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston. “People who seek mental health support are in a vulnerable state, especially when they’re seeking emergency or peer services. It’s a population we don’t want to skimp on protecting.”

Peer-to-peer mental health support can be very effective when people go through the appropriate training, Torous said. Systems like Koko take a novel approach to mental health care that could have real benefits, he said, but users don’t get that training, and these services are essentially untested. Once AI gets involved, the problems are magnified even further.

“When you talk to ChatGPT, it tells you, ‘please don’t use this for medical advice,’” Torous said. The technology hasn’t been tested for health care uses, and it clearly could give advice that’s inappropriate or ineffective.

The rules and regulations that govern academic research don’t just ensure safety. They also set standards for data sharing and communication, which lets experiments build on one another, creating an ever-growing body of knowledge. In the digital mental health industry, Torous said, those standards are often ignored. Failed experiments tend not to get published, and companies can be guarded about their research. That’s a shame, Torous said, because many of the interventions run by mental health app companies could be beneficial.

Morris acknowledged that working outside of a formal IRB experiment review process involves a trade-off. “Whether this kind of work, outside of academia, should go through IRB processes is an important question, and I shouldn’t have tried to discuss it on Twitter,” Morris said. “This should be a broader discussion within the industry, and one we want to be a part of.”

Morris said the controversy is ironic, because he took to Twitter in the first place out of a desire to be as transparent as possible. “We were really trying to be forthcoming about the technology and disclose it in order to help people think through it,” he said.
