As we bear witness to the rise of the digital age, it almost seems like there’s nothing AI cannot do. Its ability to synthesise immense amounts of information in mere seconds, at any time and, in many cases, at no additional cost, paints it as the panacea for all the world’s ills. But can a machine really replicate human connection? Let’s find out. The rapid, global escalation in mental health challenges is mirrored by rising demand for psychological and psychosocial care services. Ideally, increased demand would be met with increased supply and the problem would solve itself. Unfortunately, our reality is far from ideal.
In many parts of the world, the development, upkeep and improvement of mental health infrastructure is not considered a governmental priority, resulting in severe underfunding. As a result, services are often substandard, outdated and under-resourced. A shortage of trained mental health professionals, coupled with their uneven geographical distribution (far fewer practitioners in rural than in urban areas), creates added barriers to accessing care. Other deterrents include, but are not limited to, poverty, stigma, discrimination and low mental health literacy. It should come as no surprise, then, that the myriad of seemingly omniscient AI chatbots available today appear to many as the perfect solution. I’d be lying, however, if I said I wasn’t hesitant. Now don’t get me wrong- AI is the future, and to deny that is to be left behind. Nevertheless, nothing is without its limitations- creepy/cool robot friends included!
Q So, what potential do AI models have in the field of mental health care?
While they are by no means a replacement for human therapists, there are some ways in which they can be used to supplement the therapeutic process. For example, using AI chatbots to conduct initial screenings can streamline the triage process, connecting individuals with the appropriate human care faster and more smoothly. Clinicians and clinic coordinators may also find AI useful for optimising administrative tasks. For example, these models can handle appointment scheduling and fee payment- and, where client consent has been obtained, AI may even be used to facilitate note taking in therapy sessions. The pattern recognition abilities demonstrated by these models may help clinicians individualise treatment plans and monitor their clients’ progress- however, clinicians must remain aware of potential flaws and biases in AI algorithms and always prioritise their own clinical judgement. With regard to clinical training, AI chatbots are able to simulate clients, providing a risk-free environment in which future practitioners can practise navigating difficult conversations. While, of course, this does not replace or eliminate the need for human-led, hands-on training, it goes without saying that one can never practise too much!
Q Right, so… what’s the catch?
Given that AI models are always learning, it stands to reason that they can be trained in different therapeutic modalities and, to an extent, even mimic the manner in which therapists interact with their clients. Right? Well… kind of. If we are being extremely literal, then yes- AI chatbots can be trained in structured therapeutic modalities and guide users to challenge unhelpful thinking patterns or practise regulation techniques. They can provide users with helpful information about psychological difficulties and treatment options, and even allow them to practise interpersonal skills in a controlled environment.
However, none of this is risk free- and when dealing with vulnerable individuals, taking a gamble is not recommended. It is important to keep in mind that chatbots lack the capacity to feel genuine empathy or truly understand the nuances of human experience- both of which factor into building a strong therapeutic alliance. This alliance refers to the bond between practitioner and client, built on shared understanding, and it is one of the strongest predictors of successful therapeutic outcomes. Additionally, AI models are never truly objective. Any biases in training data will be reflected in output, so these chatbots run the risk of providing information or advice that is not generalisable to all people, potentially causing more harm than good. AI is especially unsafe to use in crisis situations involving risk of suicide or harm to the self or others. The danger of missing warning signs or giving incorrect advice in these situations is extremely high, and it remains unclear where accountability would lie and what due process would follow if a tragedy were to occur.
There are countless mental health apps available today that include a chatbot feature. Unfortunately, many of these apps do not abide by regulatory frameworks, nor are they backed by clinical trials demonstrating their efficacy. As a result, no one can say with confidence that these chatbots are 100% safe to use. What’s more, it is impossible to be absolutely certain that any data fed into an AI model is stored privately and securely. As our mental health data is highly confidential and personal, risking its improper storage or use is definitely not the best option!
Q Soooo, where do we go from here?
Being a therapist myself, I found it frankly bizarre to grapple with the idea that years of education and training would go out the window in favour of the robot overlords taking over. But dramatics aside, I recently saw a quote that read, ‘AI will not replace you; a person using AI will.’ The whole point is that AI is the Copilot (haha) and not the captain- we should be using it to optimise and streamline our systems and processes, but not to do our jobs for us. So, in terms of mental health care, here’s how I think that plays out: AI can handle data-driven tasks such as triaging, skills training and admin.
It can also provide a first line of support in non-emergency situations- especially in underserved communities. In the meantime, humans will continue to handle the complex emotional work: building rapport, understanding nuanced experiences and managing crises. AI isn’t the enemy, nor is it incapable of error. What it is is a resource that, when used cautiously and ethically, can help the field of clinical psychology move towards more precise, preventative and personalised care.