Article | Digital Health
27 March 2024

Digital products, AI & your mental health – The rise of digital treatment path

Our Managing Director for Studios, David Low, takes a look at how digital devices and health applications are becoming an ever-larger part of the therapeutic journey for managing our mental health challenges.

Don’t merely take my word for it…

But it seems many patients prefer chatbots to doctors! At least according to JAMA, in a recently published article comparing patients’ interactions with a chatbot against those with a healthcare professional.

And why not? The original ‘chatterbot’, ELIZA, developed in the 1960s, provided a medical function called Doctor, using the Rogerian psychotherapy technique of essentially responding to every statement with a question, akin to talking to a five-year-old:

  • ‘I am feeling sad’
  • ‘Why are you feeling sad?’
  • ‘Because I’m lonely’
  • ‘Why are you lonely?’

This was enough to convince the user, in this case the programmer’s secretary, that they were having an intimate conversation about their lives. Listening, and responding with encouragement to say more, seems to be enough to create a sense of care and value.
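To make the mechanics concrete, here is a minimal sketch of that kind of rule-based responder, written in Python rather than anything resembling the original 1960s implementation; the rules and reflections are illustrative only.

```python
import random
import re

# A tiny ELIZA-style responder: a handful of pattern rules turn statements
# back into questions, with generic prompts when nothing matches.
# This is an illustrative sketch, not Weizenbaum's original rule set.
RULES = [
    (re.compile(r"i am feeling (.+)", re.I), ["Why are you feeling {0}?"]),
    (re.compile(r"because i'?m (.+)", re.I), ["Why are you {0}?"]),
    (re.compile(r"i am (.+)", re.I), ["Why are you {0}?", "How long have you been {0}?"]),
    (re.compile(r"because (.+)", re.I), ["Is that the real reason?"]),
]
FALLBACKS = ["Please tell me more.", "How does that make you feel?", "Go on."]

# Swap first and second person so reflected fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I am feeling sad"))    # -> "Why are you feeling sad?"
print(respond("Because I'm lonely"))  # -> "Why are you lonely?"
```

A few dozen such rules were enough to produce the dialogue above; there is no understanding anywhere in the loop, only pattern matching and reflection.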

So yes… it’s very easy to think you’re in a genuine conversation with a chatbot, and one can understand why a reasonably benign conversation with a bot might be preferable to one with a doctor who you feel may lack empathy or sympathy, or with whom there is potentially a class divide to bridge.

So, can and should these software-based assistants be a tool to combat mental health issues?

Well, AI companions have evolved significantly from their inception as simple chatbots to complex digital entities that provide support, companionship, and personalised experiences. Initially, these companions were built on rule-based systems, offering limited interaction. Incorporating machine learning and natural language processing has since transformed AI companions into more sophisticated, interactive tools that learn and improve over time.

Let’s find out more.

In mental health, AI companions like Woebot and Replika stand out; Woebot for its innovative use of cognitive-behavioural techniques and Replika for its empathetic conversations. Replika divides opinion as to whether it is a genuine mental health tool. In a society where we have become increasingly isolated, it may be that a digital companion acts as a preventative measure, offsetting conditions such as depression.

These platforms demonstrate AI’s capability to extend support beyond traditional therapy, offering accessible and immediate assistance to individuals struggling with mental health issues.

As AI companions become more integrated into daily life, ethical considerations around privacy, data security, and emotional dependency emerge. Developing deep emotional bonds with AI poses questions about dependency and the psychological impact of such relationships. Ensuring that AI companions supplement rather than replace human interaction is crucial.

The clinical use of LLMs in chatbots represents a significant advancement in mental health care. Companies like Woebot are pioneering this approach, conducting IRB-approved studies to explore the effectiveness of LLM-augmented features in delivering safe and engaging digital mental health solutions. These studies aim to understand how LLMs can be applied to enhance therapeutic delivery and identify use cases ready for scaling.

However, the integration of LLMs in mental health chatbots presents challenges. Privacy and data security are paramount, as users share sensitive information with these platforms. Ensuring the ethical use of data and protecting user information against misuse is a continuous concern.

Secondly, given that the UK Mental Health Foundation predicts 20% of adolescents may experience a mental health problem in any given year, and that 50% of mental health issues are established by age 14, interaction with these services will likely take place on mobile devices. Ensuring security and trust, and accounting for the simple fact that rushed or ill-considered input might skew any clinical guidance, will need careful design and technical planning.

However, letting users, particularly the younger generations, engage through a platform that feels most familiar to them could unlock more clinical insight than is possible through traditional text and voice conversations. The ability to share images or GIFs to express feelings, something modern LLMs should have no trouble parsing, adds another dimension to potential input.

Additionally, addressing potential biases in AI algorithms and maintaining a balance between technological advancement and the value of human empathy are ongoing challenges. All the data will have bias; we must recognise this fact.

Companies like Woebot are actively working to mitigate these challenges by developing safety features and conducting rigorous testing. Their approach includes using LLMs to interpret user input and route to human-written content, ensuring clinical appropriateness and safety. The future of AI companions in mental health looks promising, with continued innovation and responsible development poised to enrich human well-being and enhance the accessibility of mental health support.


Woebot’s approach to integrating Generative AI into mental health support is innovative and conscientious. Alison Darcy, PhD, Founder and President of Woebot Health, highlights the company’s commitment to advancing the field responsibly:

“I’m delighted to say we have registered the first known IRB-approved study of its kind on clinicaltrials.gov – a trial that seeks to explore how a reduced set of LLM-augmented Woebot features compares with the same features in Woebot as it is today”.

This statement underscores Woebot’s dedication to scientifically validating the efficacy and safety of LLM-augmented features in mental health support.

Further, Woebot Health emphasises the importance of safeguarding user privacy and ensuring data security. The company has implemented several safety features to protect users:

“Users never interact directly with LLMs… Our proprietary Concerning Language Detection algorithm runs on user input before it is passed to an LLM. We use a proprietary prompt architecture designed to prevent prompt injection attacks… other guardrails such as off-topic identification, maximum turn enforcement, and output validation keep our interactions with the models targeted and succinct”.

These measures demonstrate Woebot’s proactive stance on privacy and safety, ensuring that interactions are secure and clinically appropriate.
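Reading only that description, one can sketch roughly what such a guarded pipeline looks like. The code below is an assumption-laden illustration: the helper names, keyword lists, and canned content stand in for Woebot’s proprietary classifiers, prompt architecture, and clinician-authored material, and are not their actual implementation.

```python
# Illustrative sketch of a guarded chatbot turn, based only on the safety
# features described above; every helper here is a simplified stand-in,
# not Woebot's actual code or algorithms.
MAX_TURNS = 10
CONCERNING_TERMS = {"hurt myself", "end my life"}   # stand-in for a trained classifier
ON_TOPIC_TERMS = {"sad", "sleep", "anxious", "mood", "mind", "worry"}

# Clinician-authored content the bot may send; the LLM only selects a label,
# it never writes the reply itself, which also serves as output validation.
APPROVED_RESPONSES = {
    "low_mood": "It sounds like today has been heavy. Want to try a short thought exercise?",
    "sleep": "Poor sleep makes everything harder. Shall we look at your wind-down routine?",
    "default": "Thanks for sharing that. Tell me a little more about what's on your mind.",
}

def detect_concerning_language(text: str) -> bool:
    # Runs on user input before anything is passed to an LLM.
    return any(term in text.lower() for term in CONCERNING_TERMS)

def is_off_topic(text: str) -> bool:
    return not any(term in text.lower() for term in ON_TOPIC_TERMS)

def classify_with_llm(text: str) -> str:
    # Placeholder for a constrained LLM call that may only return an allowed label.
    return "sleep" if "sleep" in text.lower() else "low_mood"

def handle_turn(user_input: str, session: dict) -> str:
    # 1. Screen input first; escalate rather than chat if concerning language is found.
    if detect_concerning_language(user_input):
        return "It sounds like you might need more support than I can offer; here is how to reach a crisis line."
    # 2. Enforce a maximum number of turns and keep the exchange on topic.
    session["turns"] = session.get("turns", 0) + 1
    if session["turns"] > MAX_TURNS or is_off_topic(user_input):
        return APPROVED_RESPONSES["default"]
    # 3. The LLM interprets the input; the reply is routed to human-written content.
    label = classify_with_llm(user_input)
    return APPROVED_RESPONSES.get(label, APPROVED_RESPONSES["default"])

print(handle_turn("I have been sleeping badly and my mood is low", {"turns": 0}))
```

The design choice worth noticing is that the generative model never speaks to the user directly; it only interprets input and selects from pre-approved content, which is what keeps the interaction clinically bounded.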

By conducting rigorous research and implementing robust safety protocols, Woebot Health exemplifies a responsible approach to employing LLMs in mental health care. Their work serves as a model for combining cutting-edge technology with ethical considerations to deliver safe, effective, and engaging digital mental health solutions.

It’s interesting – and worthy of a separate article – that LLMs are at the centre of making these trials more efficient. Generative AI-driven processes are transforming the clinical trial landscape, offering novel solutions to long-standing challenges. By processing vast amounts of text data and generating coherent text sequences, these models can automate intricate tasks through fine-tuning and human feedback, with particular impact on fields that require extensive training and command high pay.

A paper published last year explored the advantages and pitfalls of relying on LLMs, whilst offering an optimistic view of their potential.

Clinical trials, the cornerstone of evidence-based medicine, face numerous challenges, including patient-trial matching and planning. LLMs are emerging as a transformative force in this domain by enhancing patient-trial matching, streamlining clinical trial planning, analysing free-text narratives for coding and classification, assisting in technical writing tasks, and providing informed consent via specific LLM-powered chatbots.
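To illustrate just one of those tasks, patient-trial matching, the sketch below shows how an LLM might be prompted to check a patient’s free-text notes against a trial’s eligibility criteria. The criteria, prompt wording, and the call_llm placeholder are assumptions for illustration, not a description of any real screening system.

```python
# Illustrative sketch of LLM-assisted patient-trial matching; the criteria,
# prompt wording and call_llm() stand-in are assumptions for illustration,
# not any specific product's pipeline.
ELIGIBILITY_CRITERIA = [
    "Aged 18-30",
    "Self-reported low mood for at least two weeks",
    "No current psychiatric medication",
]

def build_matching_prompt(patient_notes: str) -> str:
    criteria = "\n".join(f"- {c}" for c in ELIGIBILITY_CRITERIA)
    return (
        "You are screening candidates for a clinical trial.\n"
        f"Eligibility criteria:\n{criteria}\n\n"
        f"Patient notes:\n{patient_notes}\n\n"
        "For each criterion, answer MET, NOT MET, or UNCLEAR with a one-line reason."
    )

def call_llm(prompt: str) -> str:
    # Placeholder for whichever model API is used; the returned judgements
    # would be reviewed by a human before any screening decision is made.
    raise NotImplementedError

notes = "24-year-old reporting persistently low mood for the past month; sleep disrupted."
print(build_matching_prompt(notes))  # this prompt would then be sent to the model
```

The value here is not that the model decides who is eligible, but that it turns unstructured narrative into per-criterion judgements a coordinator can verify far faster than reading every note from scratch.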

Going full circle, the careful use of LLMs to validate clinical test subjects may be one critical step to proving the validity of LLM-driven services in the first place. Only with those valid clinical trials will we discover the true power of LLMs as a clinical tool that augments or replaces current techniques.

The chatbot may be a valuable tool in tackling one of the most significant issues large swathes of our population face. While we may have ethical considerations, and we do, as long as the technology can be validated to operate as it should, the real issue is making sure the actual ‘therapy’ is up to scratch.

This is not one-size-fits-all. Mental health has been addressed by broad, sweeping tools, like meditation apps, for too long; there is a need to differentiate between conditions and cultural idiosyncrasies to get the right tool to the right patient. Technology is the least of our worries in this case. In future articles we will explore the ethics, sleight-of-hand, and user experience elements often debated in this industry, and the risk that digital solutions are merely a sticking plaster for human therapy.

Our Digital Health expertise

Mobile, web, AI and sensors.


Authors

David Low
Managing Director - Studios
