
Into the loop: can AI rescue mental health services?

Thinly staffed NHS mental health services are under intense pressure, with patients facing interminable waits for diagnosis and treatment. Could artificial intelligence and chatbots be part of the answer? What are the risks, and what does this largely unregulated new technology mean for patients and staff? Craig Ryan investigates.


Mental healthcare is intrinsically human. We’re dealing with the workings of the human mind, with intuition and emotion, and how humans relate to other humans. Can artificial intelligence (AI) – machines that try to think and communicate like us – really help deliver this kind of care? Many experts think so, and NHS mental health services are among the first to use AI directly in patient-facing care.

We need to do something. Mental health services are under huge pressure, with spiralling waiting lists and a chronic shortage of almost every kind of mental health professional. And everyone knows demand is only going to grow. “It’s bad enough already,” one NHS clinician told me, “but we haven’t got a clue how to cope with what’s coming down the track.”

According to Dr Anna Moore, a child psychiatrist at Cambridge University Hospitals, a fifth of all children in the UK have a diagnosable mental health condition that would benefit from treatment, but 70% of those get no help at all. Waiting lists have doubled in the last five years and some children with lesser needs receive treatment while more serious cases are “missed” by the system.

“At the moment, we identify unsystematically and then refer everything to CAMHS [Child and Adolescent Mental Health Services],” says Moore. She is leading a major research programme looking at “whether we can streamline that process using AI… and create a preventative, early intervention pathway for children with mental health problems.”

Moore, who has secured £2.5 million in government funding from UK Research and Innovation, wants to combine the vast amounts of information routinely collected about children with research data to create AI tools that improve and speed up identification and diagnosis. Work is due to start this summer on large regional databases bringing together data from the NHS, schools, social care, housing and other services. The idea is for AI to support decision making by spotting things a human clinician may not – or may not spot quickly enough.

A multi-disciplinary team within an ICB could, for example, use the tool “to identify the kids most at risk”, she says, and pull together what’s known about them – “are they already in the system, on a waiting list or have they not popped up at all?” – before alerting the next practitioner who sees them that “there’s something they need to look at”.

It’s an exciting prospect, but the work involved in identifying and accessing the data needed, building the AI models, developing use pathways and managing the many ethical issues is huge. Moore doesn’t expect a prototype to be ready for testing for three or four years.

“Incredibly contentious”

According to Dr Rishi Das-Gupta, chief executive of the NHS Health Innovation Network for South London, which works with industry partners to implement new healthcare technology, AI could help to relieve the pressure on mental health services by speeding up triage and diagnosis, offering new or supplementary forms of treatment, and by helping mental health professionals work more effectively. But many of these technologies are in their infancy, he warns, and many important ethical and regulatory issues have yet to be fully explored.

Offering AI tools to patients on waiting lists “might help to identify and triage who we should be seeing,” Das-Gupta says. Some may also benefit from AI therapy tools while they’re waiting, but that’s “incredibly contentious”, he warns. “We haven’t diagnosed those patients yet, we may not have even seen them, but we’d be offering them something. But is it better and safer to offer something rather than nothing?”

Many patients are already using AI-powered chatbots and apps to access NHS mental health services. Systems such as Limbic Access and Wysa, developed by cutting-edge AI firms in partnership with mental health practitioners, provide an “intelligent front door” for patients, replacing the often-unsatisfactory traditional routes via GP appointments, phone calls or daunting website forms.

These chatbots are nothing like the often frustrating chat functions that pop up on bank and utility company websites. Behind the conversational interface of Limbic Access, which is used by more than a third of local NHS Talking Therapy services, is a powerful clinical AI – what the firm’s chief executive Ross Harper calls “a clinical brain”. Trained “on hundreds of thousands of data points from a clinical environment”, it tries to understand what’s most likely to be the problem and decide where the chatbot should probe further, he explains.

When a human clinician picks up this information “there’s already been an intelligent analysis to help them make a high-quality clinical decision and identify the correct treatment pathway,” Harper says.

Taking pressure off the care pathway

Limbic has already been used by 270,000 NHS patients, and independent research found it reduced waiting times and therapy drop-out rates, and significantly increased access, especially for hard-to-reach groups. Most importantly, recovery rates more than doubled. Harper says the tool has already saved 50,000 hours of clinician time and cut recovery costs by 90%.

Far from feeling fobbed off with a second-rate service, many patients – especially young people – prefer accessing support through an app, says Ross O’Brien, a former NHS mental health commissioner who is now European managing director of software firm Wysa. Research among 6,000 UK youngsters found most would turn to their smartphones for mental health advice rather than have a “potentially embarrassing” conversation with a GP or mental health clinician. “We weren’t surprised by that,” says O’Brien. “But a majority said they would go to TikTok for support, and from a clinical perspective, that’s scary.”

Rather than fight against this, Wysa says it offers a safe, clinically-validated gateway that’s available 24/7, offering initial support as well as triage. “We wanted to build a single tool that could take the pressure off throughout the care pathway,” O’Brien says. “The beauty of AI is that it will guide you towards understanding your presenting problems in an interactive way. It keeps your attention but also validates that you’re going down the right path. It helps you towards the right support and information immediately.”

With six million users worldwide, Wysa is used by NHS adult services in Dorset and by CAMHS in Northamptonshire, among others, and functions as the sole gateway to mental health services in Singapore, where it recently caught the eye of shadow health secretary Wes Streeting. In a recent trial, Wysa was also distributed to Scottish schoolchildren, with impressive results: 82% accessed the app five times or more.

AI in action: Ambient Voice Technology


Ambient Voice Technology (AVT) is an AI-driven technology being trialled in some NHS organisations in London. Enthusiasts hope it will reduce clinician burnout and speed up patient flow by taking over much of the admin work involved in clinical consultations. Like most successful innovations, it’s a blend of old and new technologies, combining cutting-edge AI with audio recording and speech recognition, which have been around for decades.


“It listens into the conversation, with the patient’s consent, and then processes all that information in a secure environment,” explains Rishi Das-Gupta, chief executive of the NHS Health Innovation Network for South London. “Then it can, for example, produce a clinical note, dictate a letter, book appointments or suggest medication.”

He stresses that human clinicians remain in charge. “What’s exciting for me is that it’s a supervised use of AI. It generates something for the clinician to check. It’s quite helpful to have something that’s 95% done,” he says.


Doctors in America are already using AVT, with the Microsoft-backed Dragon Ambient eXperience the first large-scale product out of the blocks. Other big tech firms like 3M are developing similar tools, while smaller ones like Ditate.it already offer healthcare-specific products.


Some hurdles still need to be overcome before AVT can be widely deployed in the NHS. There are fears that easier ordering could lead to more waste, that clinical notes could become longer or that clinicians may be tempted to talk more to the AI than the patient. Some doctors are also anxious about having their every word recorded in case patients fixate on something they didn’t mean or which wasn’t important.

If these concerns can be ironed out, cutting down on paperwork isn’t the only potential benefit of AVT, Das-Gupta says. “The experience of the patient and the clinician is qualitatively different. In a simulation with GPs, not having to take notes also led to a much more natural patient-facing interaction.”

“AI will change our jobs, not replace them”

Mention AI in any context and the question “will it take our jobs?” inevitably follows. Limbic’s Ross Harper says conversations about AI “too often go down the route of substitution” and that tech companies, keen to bang the drum about savings, can be the worst offenders. “I think that’s ignorant. Our job is not to reach human levels of performance and then substitute for clinicians. It’s to reach the highest level of performance and then hand over to a human professional.”

Das-Gupta says “supervised” or “blended” AI tools – like the Ambient Voice Technology he is trialling with NHS providers in London (see opposite) – aim to “improve the reach of what we can do as clinicians and managers”, not replace them. “History shows that new technology often promises to be immensely labour-saving, but actually we use the time saved to do something more valuable. We should expect AI to change our jobs but not to replace them,” he explains.

The key ask for managers is “being thoughtful” and designing “valuable” jobs that don’t treat staff as “automatons”, he adds. With many routine tasks automated, those jobs could become more, not less, demanding, as clinicians focus on more complex tasks that, at least for now, only humans can do.

Meri Beckwith is co-founder of Lindus Health, which develops AI tools to improve and speed up clinical trials. By drafting documents, protocols and patient-facing materials, AI can already shave “at least a month” off most clinical trials, he reckons, and the savings will only get more significant. “But it’s definitely not about replacing people, it’s more about helping managers and medics extract more signal from data, rather than being overwhelmed with all the data produced in a clinical trial,” he says.

Beckwith says innovations like Woebot, an AI-driven chatbot already approved by the US Food and Drug Administration for offering Cognitive Behavioural Therapy (CBT) to patients, “give us more resource to deal with mental health challenges and potentially free the time of clinicians to focus on more severe or complex cases”. While AI has already shown it can have a “significant impact” on broad, common conditions like anxiety and depression, it’s not yet proven safe for more high-risk conditions like schizophrenia, he adds. “But do I think it will get there? Yes, definitely, based on the huge progress we’ve seen in a short time.”

Digital turbulence

One threat to that progress is what a recent House of Lords report called “digital turbulence”. The association of AI with deep fake images, online abuse, electoral manipulation, espionage and fraud has led to widespread mistrust and even fear, while reports of ‘hallucinations’ by popular AI tools like ChatGPT and Google Gemini undermine faith in its effectiveness. The lack of a regulatory framework also feeds the perception of AI as a ‘wild West’ technology fraught with risk, especially in a high stakes environment like healthcare.

As with any healthcare innovation, building clinician and patient trust is the key to unlocking the potential of AI. Mistrust and nervousness “is not misplaced, but it needs to morph into scrutiny”, says Harper. That means only using clinically validated tools which have been shown to work elsewhere. He sees the clinical validation of Limbic, awarded Class II medical device status by the MHRA, as crucial to its future development.

While we need “guardrails” to ensure AI is used safely and responsibly, the risks shouldn’t blind us to the possibilities, says Rishi Das-Gupta: “If we were as risk averse in road technology as we are in healthcare AI, we’d never have let cars on the roads in the city.” In developing a regulatory framework for AI, he sees driving laws as a possible model, with a graduated response for “careless AI”, “dangerous AI” and “high-consequence AI”. Investment and regulatory action could then be targeted where the risk is greatest – just as we put traffic lights at dangerous corners and invest more in safety features as cars get faster. Of course, this means that humans must be in control of how AI develops – something that might be easier to achieve in a heavily regulated, safety-conscious industry like healthcare than in other sectors like the media.

Far from being resistant to new technology, NHS organisations are “much more innovative than people think,” concludes Harper. “They’re mission-driven people who care about real value and impact. If you can show you understand their challenges and have ways to solve them, people are very willing to change their thinking and try something out.” However the tech develops, it’s those “mission-driven” humans – managers and clinicians working together – who will make AI work in mental health.

Find out more

If you’d like to read more from MiP, sign up to receive our free monthly emails – we’ll keep you up to date on news and events in health and care management
