The following blog is a summary of the research, based on two reviews by the Parliamentary Office of Science and Technology (POST). Both reports and their research references can be found here:
- https://post.parliament.uk/research-briefings/post-pn-0738/
- https://researchbriefings.files.parliament.uk/documents/POST-PN-0737/
By Katie Burke, Kaelyn Dias, and Emma Palmer-Cooper
Artificial intelligence (AI) refers to any technology that allows machines to do tasks we would usually expect a human to do. Healthcare services are increasingly exploring ways that AI can be used in mental healthcare. These range from intelligent automated appointment booking and reminders to providing wellbeing support through chatbots.
Ways AI can be used in mental healthcare
Research so far has shown that AI can be very useful for administrative tasks. For example, recent research has shown that clinical notes about appointments can be created using so-called ‘ambient voice technology’. This technology acts as a scribe by listening to clinical conversations, transcribing them, and then summarising the appointment into notes and formal letters. A large amount of administrative time in healthcare is taken up by manual note taking and letter writing, and ambient voice technology has been shown to reduce this by more than half. As well as saving time, the AI-generated summaries and notes were rated as higher quality, more accurate, and generally more efficient, leaving clinicians more time to engage with their patients.
AI can also support early detection of risk, initial assessment (triage) for mental health treatment, prediction of risks such as relapse, personalisation of treatment, and support for patients between therapy sessions. In these settings, AI has the potential to reduce pressure on healthcare systems and provide more support for mental health services. AI could help move mental healthcare towards “precision psychiatry”, where care is tailored to individuals using data from symptoms, behaviour patterns, and even so-called “digital biomarkers”: measurable information about our behaviour and biology that is collected using digital technologies, such as smartwatch readings of heart rate and movement, or smartphone usage patterns.
AI tools and what they mean for mental health support
Regulated digital mental health interventions can be designed to fit into care pathways. Some tools, such as AI-assisted referral chatbots, are already being trialled across NHS trusts and have shown evidence of saving clinician time and improving the accuracy of initial assessment (triage). Similarly, reviews of multiple studies have shown that conversational agents – specially designed chatbots that simulate human conversation – can reduce symptoms of depression and distress. These tools are typically designed and tested by researchers, scientists and clinicians. Some trials suggest that AI chatbots can reduce symptoms of anxiety or depression in the short term and can help people engage with therapy while waiting for appointments. However, there is less information about how long these improvements last, partly because people stop using the tools over time. Research also shows that AI cannot replace human therapeutic relationships.
Other AI-based support comes from mental health chatbots and self-help apps that are sold directly to consumers. These are often less regulated and vary widely in quality and in who has developed them. Some people also use general-purpose chatbots (such as ChatGPT, Copilot and Gemini) for mental health support. However, these are not designed for mental healthcare and are not recommended for use in this way. While some people find them helpful for emotional support, there are concerns about privacy, misinformation, and emotional dependence; using these tools in a way they were not designed for, such as mental health support, can lead to unintended harm.
What worries people about AI in mental health care
Whilst people find different types of AI useful to support their mental health, there are concerns about the ‘correct’ or ethical ways in which AI can be used in mental healthcare, and what controls might be in place.
One serious concern is how AI might support, or prevent, access to timely, affordable, and appropriate mental health care. Some people argue that AI can reduce wait times, reducing pressure on NHS staff. However, for people who are not confident using digital technology, AI solutions may actually reduce access to healthcare, a problem known as digital exclusion.
Another concern is that if AI is poorly designed, and not designed with specific groups of people in mind, the tools may be unhelpful in mental healthcare, or may make a person’s symptoms worse. This is particularly relevant to people from under-represented or minoritised social and cultural groups. Many large-scale AI systems are trained on English-only, Western-centric information. As the UK is home to people from diverse ethnic, geographical and cultural backgrounds, poorly designed AI may not represent their varied needs, and so may provide only limited support.
Privacy is another area of concern for both healthcare staff and people receiving mental healthcare. Whilst some people feel comfortable opening up to AI because of the anonymity it can provide, many have serious concerns about confidentiality and data protection across the wide variety of AI-based systems available, especially when the information is sensitive. If data protections and security are not appropriate, using such AI systems may expose people’s identities. While some regulations around data usage do exist for AI and mental health, there are calls for tighter restrictions and for clearer descriptions of how data is used and who has access to it. Selling user data from wellbeing apps is not against the law, but a lack of clear messaging outside of Terms and Conditions often leads people to assume the information they provide will be kept private, leaving them unaware that they have given permission for this data sharing.
AI is also prone to bias, because AI-based programmes can only respond based on the information they are trained on. If this data contains flawed opinions, flawed clinical records, or outdated information, the system may provide unreliable or unsafe responses, depending on its purpose.
A key finding is that many people do not want to lose access to in-person mental health support. Whilst reduced waiting times are a clear benefit to people seeking support, there is a strong consensus that increased access should not come at the expense of human connection.
The future of AI in mental healthcare
AI-based tools have the potential to help address current issues in mental healthcare, and will likely become increasingly influential in the future development of services. If the risks identified are thoroughly addressed through explicit rules and regulations, specialised AI training, and strict safeguarding measures, AI may help alleviate pressure on an overstretched system by enhancing support and assisting practitioners, rather than replacing them.