How ageist AI could affect the health of the elderly

As the healthcare industry begins to rely more on artificial intelligence, concerns arise as to how AI could introduce or aggravate ageism in healthcare.


Artificial intelligence has been in the spotlight for its ability to discriminate and reflect prejudices against groups of people, be it on the grounds of race, religion, or gender.

This, of course, stems from the people behind the technology: artificial intelligence absorbs the prejudices and discriminatory attitudes of its creators.

Ageism - prejudice and discrimination on the basis of age - belongs on that list as well, since older people are routinely overlooked in the field of AI and their experiences and concerns are excluded as a result.

This was exactly the point of concern in a recent policy brief by the World Health Organization, which warned that ageism, when exhibited by AI, could have serious impacts on the health of the elderly.

“Specifically for older people, ageism is associated with a shorter lifespan, poorer physical and mental health and decreased quality of life,” WHO says, adding that it “can limit the quality and quantity of health care provided to older people.”

How can AI help care for the elderly?

AI technologies are increasingly being used in the health sector, and have great potential to improve healthcare for older and ageing populations.

To begin with, when used for remote monitoring in place of constant human supervision, AI can offer considerable predictive power for illnesses and potential health hazards.

Through the installation of health-monitoring technologies and sensors in a person’s environment, AI can track individuals’ activities and health status over long periods of time, collecting data all the while.

This continuous stream of data would train the AI to detect unusual and potentially dangerous activities, giving the system a strong ability to predict “disease progression and health risks for older populations,” while also personalising healthcare, since each individual is monitored separately.

These systems could predict whether the person is at risk of falling or other injuries, and even “assess the relative risk of disease, which could be useful for the prevention of noncommunicable diseases such as diabetes.”
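A rough sketch of how such a monitoring system might work is shown below. The sensor features, thresholds and library choices are illustrative assumptions, not details from the WHO brief.

```python
# Hypothetical sketch: flagging unusual daily activity patterns from in-home sensors.
# Feature names and values are illustrative assumptions, not from the WHO brief.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history of daily readings for one person:
# [steps taken, hours of sleep, number of gait "stumbles" detected]
normal_days = np.column_stack([
    rng.normal(4000, 500, 60),   # steps
    rng.normal(7.0, 0.5, 60),    # sleep hours
    rng.poisson(1, 60),          # stumbles
])

# Train on the person's own history so alerts are personalised.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_days)

# A new day with very few steps and many stumbles: a possible fall-risk signal.
today = np.array([[900, 4.5, 7]])
if detector.predict(today)[0] == -1:
    print("Unusual activity pattern detected: notify caregiver for follow-up.")
else:
    print("Activity within this person's normal range.")
```

In a real deployment the alert would go to clinicians or caregivers rather than being printed, and a model of this kind would need to be validated on data that actually includes older adults - which is precisely where the concerns discussed below arise.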

In short, using AI for remote monitoring would improve diagnosis, while also tackling the pressing issues of understaffing and human error.

Artificial intelligence can also be utilised in drug development, owing to its ability to work efficiently with very large data sets.

Despite the outstanding potential of AI, its use in caring for the elderly comes with a number of caveats, including the challenge of ageism.

How is AI ageist?

AI algorithms can entrench existing disparities in health care and “systematically discriminate on a much larger scale than biased individuals,” warns the WHO.

AI, when used in the health sector, runs on “biomedical big data” - large, complex data sets that focus on health. These data determine how AI operates.

Human biases, however, can become embedded into these datasets as well as AI algorithms, especially when older people are excluded from the process.

Older people often constitute only a small minority in data sets that do not specifically target them, which can produce skewed data sets that neglect their experiences and health concerns.
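The sketch below is a hypothetical illustration of how this can play out: a model trained on data dominated by younger patients can look accurate overall while performing poorly for the older minority. All numbers are simulated.

```python
# Hypothetical sketch: underrepresentation in training data showing up as
# worse model performance for older patients. Numbers are simulated, not real.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, risk_slope):
    """Simulate patients: one vital-sign feature and a binary outcome."""
    x = rng.normal(0, 1, (n, 1))
    y = (rng.random(n) < 1 / (1 + np.exp(-risk_slope * x[:, 0]))).astype(int)
    return x, y

# Older patients are a small minority of the training data, and the
# feature-outcome relationship differs for them (an illustrative assumption).
x_young, y_young = make_group(2000, risk_slope=2.0)
x_old, y_old = make_group(100, risk_slope=-1.0)

model = LogisticRegression()
model.fit(np.vstack([x_young, x_old]), np.concatenate([y_young, y_old]))

# Report performance per age group, not just overall.
for name, x, y in [("younger", x_young, y_young), ("older", x_old, y_old)]:
    print(f"accuracy for {name} patients: {model.score(x, y):.2f}")
```

Reporting results separately for each age group, as in the last lines, is one simple way this kind of skew can be surfaced before a system reaches patients.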

The problem is compounded when older people are also excluded from the design and programming process, as has often been the case, embedding ageism even more deeply into AI systems.

“The tendency is to design on behalf of older people instead of with older people. This can lead to inflexible uses of AI technology,” the WHO warns.

In other words, the AI itself would become “biased”, built on often flawed assumptions and leading to systemic discrimination against the elderly.

This bias, in turn, could degrade the quality of health care that older people receive, rendering it less effective or producing incorrect diagnoses.

Another caveat of utilising AI in healthcare is that seemingly beneficial practices, such as remote monitoring, can have negative consequences if they are not implemented with care.

Remote monitoring, for instance, could aggravate ageism by reducing or even eliminating “intergenerational contact” - communication between older people and their younger caregivers - even though contact with groups facing discrimination is one of the most effective strategies for combating prejudice.

So, what should be done?

The WHO suggests that the primary strategy for mitigating or preventing these adverse consequences of AI in healthcare is inclusivity.

Older people should be represented in the data sets and included in the data science teams, and in the design, programming, development, use and evaluation of AI systems, the organisation asserts.

“Inclusion should also be intersectional, focusing not just on age but also on differences among older people, such as in gender, ethnicity, race and ability,” adds the policy brief.

The underlying idea is that the elderly should be “fully involved in the processes, systems, technologies and services that affect them.”

