
A University of Oxford study highlights the benefits and risks of technology in healthcare, while ethical concerns persist

Britain’s overburdened caregivers need every available support, but unregulated AI bots should not be part of the solution, argue researchers calling for a strong ethical framework to govern the AI revolution in social care.

A preliminary study conducted by scholars at the University of Oxford revealed that some care providers have been employing generative AI chatbots, such as ChatGPT and Bard, to devise care plans for individuals receiving care.

This practice poses a potential threat to patient confidentiality, as highlighted by Dr. Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organizations for the study.

“When you input any form of personal data into [a generative AI chatbot], that data is utilized to train the language model,” Green explained. “This personal data could be generated and exposed to someone else.”

She cautioned that caregivers might act on flawed or biased information and inadvertently cause harm, and that an AI-generated care plan could be of substandard quality.

However, Green also pointed out potential benefits of AI. “It could assist with the heavy administrative workload and allow for more frequent revisits to care plans. Currently, I wouldn’t recommend it, but there are organizations developing apps and websites to do just that.”

AI-based technology is already in use by health and care bodies. For instance, PainChek is a phone app that uses AI-trained facial recognition to determine whether a nonverbal individual is in pain by detecting subtle muscle movements. Oxevision, a system used by half of NHS mental health trusts, employs infrared cameras in seclusion rooms (for potentially violent patients with severe dementia or acute psychiatric needs) to monitor patients’ risk of falling, sleep patterns, and other activity levels.
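PainChek’s actual model is proprietary, so the following is only a minimal sketch of the general idea: scoring hypothetical facial action-unit intensities against assumed weights and an assumed threshold to flag possible pain for a human carer to assess. None of the names or numbers below come from PainChek.

```python
# Hypothetical illustration only: flagging possible pain from facial
# action-unit (AU) intensities. The AU codes, weights, and threshold
# are invented for this sketch and are not PainChek's method.

AU_WEIGHTS = {
    "AU4_brow_lowerer": 0.30,
    "AU6_cheek_raiser": 0.20,
    "AU7_lid_tightener": 0.20,
    "AU9_nose_wrinkler": 0.15,
    "AU10_upper_lip_raiser": 0.15,
}

PAIN_THRESHOLD = 0.5  # assumed cut-off for prompting a human check


def pain_score(au_intensities: dict) -> float:
    """Weighted sum of detected action-unit intensities (each 0.0 to 1.0)."""
    return sum(w * au_intensities.get(au, 0.0) for au, w in AU_WEIGHTS.items())


def flag_possible_pain(au_intensities: dict) -> bool:
    """True means: ask a human carer to assess, not a diagnosis."""
    return pain_score(au_intensities) >= PAIN_THRESHOLD


# Example frame: brow lowering, lid tightening, and cheek raising detected
frame = {"AU4_brow_lowerer": 0.9, "AU7_lid_tightener": 0.8,
         "AU6_cheek_raiser": 0.7}
print(flag_possible_pain(frame))  # True -> prompt a carer to check
```

The point of the threshold design is that the system only escalates to a person; it never acts on the score by itself.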

One project in its early stages is Sentai, a care monitoring system that utilizes Amazon’s Alexa speakers. It’s designed for individuals without 24-hour caregivers, reminding them to take medication and enabling remote check-ins by relatives.
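Sentai’s internals are not public; purely as a hedged illustration, here is a minimal sketch of the kind of reminder-and-escalation logic such a system might run. The schedule format, grace window, and notification hooks are all invented for the example.

```python
# Illustrative sketch only: reminder-and-escalation logic a system like
# Sentai might run. The schedule format, grace window, and notification
# hooks are assumptions for the example, not Sentai's real design.

from datetime import datetime

MEDICATION_SCHEDULE = [
    {"name": "blood pressure tablet", "hour": 8, "minute": 0, "taken": False},
    {"name": "evening statin", "hour": 20, "minute": 0, "taken": False},
]

GRACE_MINUTES = 30  # assumed window before escalating to a relative


def check_reminders(now: datetime, notify_user, notify_relative):
    """Remind the user when a dose is due; escalate if it goes unconfirmed."""
    for dose in MEDICATION_SCHEDULE:
        if dose["taken"]:
            continue
        due = now.replace(hour=dose["hour"], minute=dose["minute"],
                          second=0, microsecond=0)
        overdue_minutes = (now - due).total_seconds() / 60
        if 0 <= overdue_minutes < GRACE_MINUTES:
            notify_user(f"Time to take your {dose['name']}.")
        elif overdue_minutes >= GRACE_MINUTES:
            # Remote check-in: let a relative know the dose is unconfirmed
            notify_relative(f"{dose['name']} not confirmed as taken.")


# Using print to stand in for a speaker announcement and a phone alert
check_reminders(datetime.now(), print, print)
```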

Another initiative, led by the Bristol Robotics Lab, focuses on developing a device for individuals with memory issues. This device includes detectors that automatically turn off the gas supply if a hob is left on, as described by George MacGinnis, challenge director for healthy aging at Innovate UK.

“In the past, this would have required a visit from a gas engineer to ensure everything was safe,” MacGinnis explained. “Bristol is working on a system with disability charities that would allow individuals to perform this check safely on their own.

“We’ve also supported the development of a circadian lighting system that adjusts to individuals, assisting them in restoring their circadian rhythm, which is often disrupted in dementia.”
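As a rough illustration of the gas-safety device described above, here is a minimal sketch of plausible fail-safe logic. The sensor inputs, timeout, and shut-off rule are assumptions for the example, not the Bristol Robotics Lab design.

```python
# Minimal sketch of plausible fail-safe logic for the gas-safety device
# described above. Sensor inputs, timeout, and shut-off rule are assumed
# for illustration; the real Bristol Robotics Lab design may differ.

HOB_UNATTENDED_LIMIT_S = 15 * 60  # assumed: 15 minutes with nobody present


def should_shut_off_gas(hob_on: bool, seconds_unattended: float,
                        flame_detected: bool) -> bool:
    """Decide whether to close the gas supply valve."""
    if hob_on and not flame_detected:
        return True   # gas flowing with no flame: shut off immediately
    if hob_on and seconds_unattended > HOB_UNATTENDED_LIMIT_S:
        return True   # hob left on and forgotten
    return False


# Example: hob burning with nobody in the kitchen for 20 minutes
print(should_shut_off_gas(hob_on=True, seconds_unattended=1200,
                          flame_detected=True))  # True -> close the valve
```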

While individuals in creative industries worry that AI could replace them, the social care sector faces the opposite problem: a chronic shortage of workers. There are approximately 1.6 million social care workers and 152,000 unfilled vacancies, with a further 5.7 million unpaid carers looking after relatives, friends, or neighbors.

“People tend to view AI in binary terms – either it replaces a worker or we continue as we are,” explained Lionel Tarassenko, professor of engineering science and president of Reuben College, Oxford. “But it’s not like that at all – it’s about taking individuals with limited experience and enhancing their skills to match those of highly experienced professionals.

“I was personally involved in caring for my father, who passed away at the age of 88 just four months ago. We had a live-in carer. When we took over on weekends, my sister and I were caring for someone we deeply loved and knew well, who had dementia. However, we did not possess the same level of expertise as the live-in carer. These tools could have enabled us to reach a similar level of care as a trained, experienced professional.”

Nevertheless, some care managers fear that adopting AI technology could inadvertently put them in breach of Care Quality Commission (CQC) rules, jeopardizing their registration. Mark Topps, a social care professional and co-host of The Caring View podcast, said this anxiety is widespread across the sector.

“Many organizations are hesitant to take action until the regulator provides guidance, fearing potential backlash if they make mistakes,” he explained.

Last month, 30 social care organizations, including the National Care Association, Skills for Care, Adass, and Scottish Care, convened at Reuben College to discuss the responsible use of generative AI. Green, who organized the meeting, said the group aimed to produce a best-practice guide within six months and hoped to collaborate with the CQC and the Department of Health and Social Care (DHSC).

“We aim to establish guidelines that the DHSC can enforce, defining what responsible use of generative AI in social care entails,” she said.