Glance at any media source at the moment and you won’t have to look far before coming across an article about artificial intelligence.
ChatGPT seems to be leading the charge in this media furore, with Google’s Bard following in its slipstream. But machine learning and AI have been making waves in healthcare for far longer than chat interfaces have been in the public consciousness.
Whether it is predicting the 3D structure of a protein from its 1D amino acid sequence or identifying skin cancer more accurately than human doctors, digital products underpinned by enormous datasets and highly sophisticated learning algorithms are creating a healthcare future that, just five years ago, no one would have thought possible.
However, returning for a moment to ChatGPT, Bard and other chat interfaces… How do we ensure that the large language models underpinning these interfaces start to address the wider challenges of delivering scalable access to health systems and personalised, contextualised care?
Could we use a large language model AI trained to help break down some of the barriers we experience when dealing day-to-day with our health & wellness?
There’s a good chance… Let’s find out why.
Let’s start this thought experiment by focusing on a particular therapeutic area, such as MCI (mild cognitive impairment), which can, in many cases, be the starting point of dementia and Alzheimer’s disease.
The prevalence of MCI increases with age, yet it can go unnoticed, untreated and underserved until the patient’s cognitive impairment is severe enough to be identified by a healthcare professional. The spectrum of care that an individual will receive as their condition deteriorates throughout old age can be varied and complex.
But could the combination of AI and wearable technology lead to earlier detection, earlier diagnosis, and earlier signs of deterioration being spotted? And could it provide early access to the therapy, drugs and support that allow for a better quality of life for longer?
As a society, our general health is improving and we are living longer. As that happens, we need to think of smarter ways to manage an ageing population: we will need to support people through changes in their brain function and physical capacity, while ensuring they can live independent lives for as long as possible.
It is estimated that 10 to 20% of people aged 65 or older with MCI will develop dementia over a one-year period. However, not everyone with MCI develops dementia, and many of the initial signs of MCI are ordinary aspects of cognitive ageing that anyone might expect to experience, such as forgetting things more often or losing your train of thought. So whilst these instances aren’t without worry, they sit in a different category from those who will deteriorate faster and to worse outcomes.
Studies have shown the physiological barriers around brain impairment in old age are huge, but arguably it is the emotional and personal issues that can be the most difficult to deal with.
Embarrassment, shame, fear and other worries around losing one’s independence are terrifying for adults to face. In many instances, losing independence may also mean losing control of your finances, your ability to be creative, and your ability to have meaningful conversations and engage in hobbies.
As these initial signs of MCI appear, the challenge becomes: is this the point at which conversational AI and personalised healthcare can help?
If the barriers to help and support are embarrassment, frustration and shame, then maybe cloud-based large language models, paired with the ambient computing power all around us and the devices we carry, wear and use in our homes, can help.
They could give older people immediate access to high-quality information, whilst providing reminders, notifications and next steps to additional support.
Remembering appointments… that isn’t difficult, is it?
Merely linking your doctor’s calendar to your phone’s calendar would give you the simplest way to know where to be and when. This is nothing new; it can be done today.
However, what if the AI behind conversational interfaces could then tell you who made the appointment, which consultant you are meeting, why you are meeting them, and even how to get there, booking the relevant taxi or ambulance service? Could we start building a set of digital tools that aren’t just about productivity but also about more pastoral elements of care, such as reducing stress and anxiety or increasing patient confidence?
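As a sketch of what such an appointment briefing might look like, here is a minimal example assuming a hypothetical `Appointment` record synced from a linked calendar. All field names and values are illustrative, not drawn from any real calendar or health API:

```python
from dataclasses import dataclass

@dataclass
class Appointment:
    """Illustrative appointment record, as might be synced from a linked calendar."""
    booked_by: str   # who made the appointment
    consultant: str  # who the patient is meeting
    reason: str      # why the appointment was made
    location: str    # where to go
    time: str        # when to be there

def briefing(appt: Appointment) -> str:
    """Turn the structured record into a plain-language reminder that a
    conversational interface could read aloud on request."""
    return (
        f"Your {appt.booked_by} booked this appointment. "
        f"You are meeting {appt.consultant} at {appt.time}, {appt.location}, "
        f"to discuss {appt.reason}."
    )

# Hypothetical example data.
appt = Appointment(
    booked_by="GP surgery",
    consultant="Dr Patel from the memory clinic",
    reason="a routine cognitive assessment",
    location="Riverside Health Centre",
    time="10:30 on Tuesday",
)
print(briefing(appt))
```

The point of the structured record is that the same data could feed a map route, a taxi booking or a spoken reminder, rather than living only as a calendar entry.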
Next, at the appointment itself, we need to remember what to ask the GP or clinician, and what their response was. Again, this is not difficult for young, able-bodied individuals with no cognitive impairment, but for MCI patients it can be a real struggle.
Most GP appointments last only 10 minutes. During this time, you need to understand, digest and remember what the doctor said. For someone with MCI, trying to recall this information could be difficult and distressing.
So could an AI app, listening in, ingest the conversation, summarise it, store it in note form and play it back on request? Could the app also link to your EMR (Electronic Medical Record) to pull in additional notes taken after the consultation? And could you ask it to store what follow-on appointments have been made, and to supply information about a therapy, treatment or drug?
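A minimal sketch of such a consultation log, assuming speech-to-text has already happened elsewhere. The naive summariser below, which simply keeps the first sentence of each speaker turn, is a stand-in for what would really be a large language model call:

```python
from datetime import date

class ConsultationLog:
    """Illustrative store for consultation transcripts and their summaries."""

    def __init__(self):
        self.entries = []  # one entry per appointment

    def ingest(self, day: date, transcript: list[str]) -> None:
        """Store the raw transcript alongside a short summary.
        A real app would summarise with a language model; here we
        just keep the first sentence of each speaker turn."""
        summary = [turn.split(". ")[0] for turn in transcript]
        self.entries.append({"date": day, "transcript": transcript, "summary": summary})

    def play_back(self, day: date) -> list[str]:
        """Return the stored summary for a given appointment date."""
        for entry in self.entries:
            if entry["date"] == day:
                return entry["summary"]
        return []

# Hypothetical transcript of a 10-minute GP appointment.
log = ConsultationLog()
log.ingest(date(2023, 6, 1), [
    "Doctor: Your blood pressure is fine. We'll repeat the memory test in three months.",
    "Patient: Should I keep taking the same tablets. Or change anything?",
])
print(log.play_back(date(2023, 6, 1)))
```

Keeping the full transcript alongside the summary matters here: the summary is for quick playback, while the transcript remains available for a carer or clinician to review in full.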
These are just a couple of examples, but the possibilities stretch far beyond these fairly simple scenarios.
Most people don’t realise that our smartphones and smartwatches can provide a rich stream of information if used in the right manner.
From the camera and microphone to the gyroscope and light sensor, and everything in between, these devices can create, capture and profile information about how you look, how you sound, how you move and how you react. Combined with AI, that data could not only track the progression of a condition but also trigger the right events at the right time, such as when the measured parameters of MCI move from normal tolerance to requiring further medical examination.
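To make that “normal tolerance” idea concrete, here is a minimal sketch assuming a single hypothetical metric, say a daily word-recall score from a phone-based exercise, compared against the patient’s own rolling baseline. The metric, window and threshold are all illustrative assumptions, not clinically validated values:

```python
from statistics import mean, stdev

def needs_review(history: list[float], latest: float, tolerance: float = 2.0) -> bool:
    """Flag when the latest reading drifts more than `tolerance` standard
    deviations below the patient's own baseline. Purely illustrative:
    real thresholds would need clinical validation."""
    if len(history) < 5:  # not enough data to form a baseline yet
        return False
    baseline, spread = mean(history), stdev(history)
    return latest < baseline - tolerance * spread

# Hypothetical daily recall scores: a stable baseline, then two new readings.
scores = [8.0, 9.0, 8.5, 9.0, 8.5, 8.0]
print(needs_review(scores, 8.0))  # within normal tolerance
print(needs_review(scores, 4.0))  # well below baseline: flag for examination
```

Measuring drift against a personal baseline, rather than a population average, is what lets a silent agent distinguish ordinary age-related variation from a change worth escalating.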
This all might sound far-fetched, but it really isn’t. As we mentioned at the top of this article, AI is already being used in therapeutic areas such as oncology, where it predicts cancer from patient data. So it won’t be long until it finds its way into the everyday devices we interact with, most likely as a silent agent working to spot patterns in data and surface the correct intervention at the correct time.
However, over time, once we ensure the right ethics, values and principles have been applied, why not allow it to come out of the shadows and be a digital companion that supports and benefits us in our day-to-day lives?
Most of what we are talking about in this article is hypothetical. However, developing prototypes and testing MVPs underpinned by large language models will allow us to establish what the barriers to mass-market adoption are.
Be they ethical, moral, or related to access to 5G in remote places, those barriers will likely be myriad. But building a brighter tomorrow doesn’t start tomorrow… it has to start today.
We help businesses co-create amazing digital products. If you are interested in exploring the potential of AI in platforms, applications and digital interfaces, please get in touch with our talented technologists.