Who Thought ChatGPT For Mental Health Advice Was a Good Idea?

Jan 17, 2023 | AI, tech ethics

Mental health disorders affect people of all ages, genders, and backgrounds. With huge demand for mental health resources and a dearth of trained professionals on both sides of the Atlantic, there has been growing interest in using artificial intelligence (AI) to provide services and support. And last week, mental health app Koko blew up Twitter by announcing it had been using the AI language model ChatGPT to provide mental health support to users.

https://twitter.com/RobertRMorris/status/1611450197707464706?s=20&t=kW-s8istU9OhebiI4z5RgQ

What is ChatGPT?

ChatGPT is what’s known as a large language model (LLM), developed by a company called OpenAI. Large language models are put together by scraping billions of pages of text from the internet and then using artificial intelligence to construct a model that works like a very sophisticated auto-complete. The user writes a question or instruction (known as a ‘prompt’) and the model generates the text it calculates best answers it, producing the sequence of words most likely to follow that prompt based on the billions of internet pages it has scraped.
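
To make the ‘very sophisticated auto-complete’ description concrete, here is a rough sketch of what a prompt-and-response exchange with a model like ChatGPT looks like behind the scenes of an app. It assumes OpenAI’s Python client; the model name, the example messages and the exact method calls are illustrative only and will vary with the library version.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# The user's message is the 'prompt'; the model returns the continuation
# it calculates is most likely, word by word, based on its training data.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a supportive, empathetic listener."},
        {"role": "user", "content": "I've been feeling really low lately."},
    ],
)

print(response.choices[0].message.content)
```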

LLMs aren’t ‘intelligent’ in any way that we as humans would recognise, but they are uncannily good at putting together text which looks coherent and plausible. There has been unease in the press about what these models might mean for essay-writing, for example, as they can produce quite convincing-looking essays. But there’s no guarantee of accuracy, or indeed that the text won’t be biased, racist or misogynistic (though OpenAI has put in guardrails to try to reduce the likelihood of that happening).

ChatGPT and mental health

Opportunities

On the surface, the idea of using apps running large language models to provide mental health advice may seem to have some potential. One of the major advantages is that it allows 24/7 access to mental health support. People who are struggling can reach out at any time and receive immediate help. Additionally, ChatGPT is designed to understand and respond to so-called ‘natural’ language, which means that people can communicate with the AI in a way that feels comfortable and familiar to them.

It may also be more cost-effective than traditional forms of therapy. Therapy sessions can be expensive and not accessible to everyone. By using an AI like ChatGPT, people can receive mental health advice without incurring prohibitive costs.

Problems

However, there are also downsides to using AI to provide mental health advice. One of the biggest is that any advice given by the AI may not be accurate or appropriate, because of the ‘auto-complete’ nature of LLMs. Mental health is a complex issue that requires a nuanced understanding of the individual and their specific circumstances. An AI, no matter how advanced, simply can’t provide the same level of insight and understanding as a trained therapist (or at least not for now). And, the AI may not be able to identify or respond to the kind of red flags or warning signs that a human therapist would be able to pick up on.

In terms of ethical considerations, there are other concerns about using AI to provide mental health advice. One of the biggest is around informed consent. When someone seeks mental health advice from a human therapist, they are aware that they are speaking with a trained professional and are able to make an informed decision about whether or not to proceed with therapy. With an AI, however, the person may not be aware that they are speaking with a machine and may not fully understand the limitations of the AI. In the case of Koko, though the app developers may say otherwise, it really wasn’t crystal clear to users that an AI would be providing support, and that is worrying.

Another concern is around privacy. When you speak with a human therapist, you can be confident that conversations will be kept confidential. With an AI that is being trained on user inputs, there is a risk that conversations may be recorded, shared, or kept to be used for other purposes. And the AI itself may not be able to understand or respect boundaries, which could lead to oversharing or a lack of privacy.

AI and mental health – not yet (not ever?)

Some industry insiders think the use of AI like ChatGPT to provide mental health support has the potential to be a useful tool in addressing the mental health crisis. However, there are huge downsides and major ethical concerns in using AI in this way. Most importantly, users should be told that they are consulting an AI, understand how it’s constructed and the limitations of its advice, and not rely solely on it for mental health support. For now at least, it’s best to consult a trained professional before making any decisions about mental health treatment. And, despite the current hype around artificial intelligence, that may be the case for some time to come.

Read more on living with new and emerging technologies

My Brain Has Too Many Tabs Open by Tanya Goodin

If you want to read more from me on living with technology in 2023 and beyond, pick up a copy of my latest book.