
To monitor or not to monitor? A conversation on AI health surveillance


Artificial Intelligence (AI) technology is rapidly expanding in health care, especially in the area of AI health surveillance, or monitoring.

Health monitoring refers to tracking a person’s health-related data to help prevent and treat disease before it causes harm. Health surveillance takes monitoring a step further: an individual’s health is tracked whether or not they have asked to be monitored.

When the pandemic hit, the need to monitor people’s health remotely increased significantly, and so did the development of AI COVID-19 tracking algorithms.

But what are the ethical implications of using AI in health surveillance? CHÉOS Scientist, bioethicist, and author Dr. Anita Ho shares insights from her recently published book, Live Like Nobody is Watching: Relational Autonomy in the Age of Artificial Intelligence Health Monitoring.


How will AI-powered health monitoring impact the future of health care?

“In a nutshell, AI will affect health care systems as a whole by influencing where, when, how, and what kind of care will be delivered, and by whom. During the pandemic, remote health care visits and the use of various technologies to monitor one’s health at home became normalized. With AI predictive analytics, people may feel more empowered because they have the tools to assess themselves at their own convenience. If such data are accurate and can be sent to people’s care providers, AI monitoring could reduce the number of routine health visits by alerting physicians to see patients in person only when their health is predicted to be declining.”
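To make that alerting model concrete, here is a minimal sketch of how a remote-monitoring pipeline might decide when to flag a patient for an in-person visit. The toy risk score, data fields, and threshold are all illustrative assumptions, not a validated clinical model and not anything described in the interview.

```python
from dataclasses import dataclass

# Illustrative sketch only: a real system would use a validated clinical
# risk model, not this toy weighted score.

@dataclass
class VitalsReading:
    resting_heart_rate: float  # beats per minute
    daily_steps: int

def predicted_decline_risk(history: list[VitalsReading]) -> float:
    """Toy risk score in [0, 1] based on the trend from first to last reading."""
    if len(history) < 2:
        return 0.0
    first, last = history[0], history[-1]
    # Normalize a 30 bpm resting-heart-rate rise to a score of 1.0 (assumed scale).
    hr_rise = max(0.0, last.resting_heart_rate - first.resting_heart_rate) / 30
    # Fractional drop in daily activity relative to the earliest reading.
    activity_drop = max(0, first.daily_steps - last.daily_steps) / max(first.daily_steps, 1)
    return min(1.0, 0.5 * hr_rise + 0.5 * activity_drop)

ALERT_THRESHOLD = 0.7  # assumed cutoff; tuning it trades missed declines against alert fatigue

def should_alert_physician(history: list[VitalsReading]) -> bool:
    """Flag for an in-person visit only when predicted decline crosses the threshold."""
    return predicted_decline_risk(history) >= ALERT_THRESHOLD
```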

What are some implications of integrating health surveillance with current monitoring technology?

“One of the chapters of my book talks about direct-to-consumer technologies, where people may bypass their doctors and get health apps or other devices to monitor themselves. There are already recreational or wellness devices on the market, mostly fitness trackers, that continuously gather activity or behavioural data. There are also devices that record and analyze physiological data such as heart rate or blood glucose levels. They could help inform medical decisions because they can analyze longitudinal data about a person’s activities and symptoms. These direct-to-consumer AI devices may give people more control over what information they want and when they want it, and may also allow for a more fluid way of thinking about one’s health and self-management. But we also need to be cautious about whether constant tracking and alerts may create ‘worried wells,’ making even healthy people anxious about small changes that aren’t clinically significant.”
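One way designers try to avoid creating ‘worried wells’ is to suppress alerts for fluctuations that fall below a minimal clinically important difference. The sketch below illustrates that idea; the heart-rate threshold and persistence window are made-up numbers for the example, not clinical guidance.

```python
from statistics import mean

# Hypothetical thresholds for illustration only; not clinical guidance.
MIN_MEANINGFUL_HR_CHANGE = 10.0  # bpm: ignore smaller day-to-day wobble
PERSISTENCE_DAYS = 5             # a change must persist this long before notifying

def should_notify_user(daily_resting_hr: list[float], baseline_hr: float) -> bool:
    """Notify only on a sustained, sizeable deviation from baseline,
    so routine noise does not generate anxiety-inducing alerts."""
    if len(daily_resting_hr) < PERSISTENCE_DAYS:
        return False
    recent_avg = mean(daily_resting_hr[-PERSISTENCE_DAYS:])
    return abs(recent_avg - baseline_hr) >= MIN_MEANINGFUL_HR_CHANGE
```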

What concerns are there around data sharing and privacy when using direct-to-consumer technologies?

“The data currently collected by personal devices are often owned by the company selling the device, which raises questions about how much control people truly have over their own data. Additionally, these devices collect and connect so much data in a ‘net of surveillance,’ even when it is unclear whether they are producing useful information to share with health care providers. On top of that, health information collected by doctors, labs, and other providers is private and regulated, but app stores often do not have privacy protection policies. So we also need to consider what actually counts as personal health information protected by various privacy regulations, and whether consumers would want more protection for the behavioural and recreational information that device companies are collecting.”

How could health surveillance technology impact patients’ ability to make choices about their own health care?

“Between the popularity of personal fitness tracking devices and the normalizing of disease-tracking during the pandemic, health care systems may increasingly expect people to monitor themselves using various kinds of devices. Many have argued that personal health tracking can democratize personal health information and promote better care. Essentially, if the information is clinically relevant, and patients can see it whenever they want, they can share that information with their doctor, or decide based on the algorithmic analysis whether a doctor visit is required. It’s like flipping the script from ‘the doctor will see you now’ to ‘you will see your doctor now,’ which could give patients more power to make decisions about their own health.”

How do you see AI supporting care in long-term care facilities where health monitoring is an integral part of residents’ well-being?

“The question here is to think about how we are using AI: is it being used to enhance care or to replace interactions people depend on? Although studies in people with dementia show that interactions with any kind of device could at least help with engagement, and we may be able to use these tools to collect information that can predict health changes and send alerts, we need to be cautious about substituting human monitoring with AI. I have been working on a CIHR-funded project interviewing health care providers, people living with Parkinson’s disease, and family caregivers. Some of them have shared concerns about using predictive monitoring technologies to replace human interaction. For example, if the technology shows that an older adult is doing just fine based on vital-sign readings alone, there may be less incentive to actually check on the long-term care resident. This could extend beyond health care providers to family members who want to see how their loved one is doing: if there are no indications from the AI that the person’s health is worsening, they might be less inclined to visit. Social interactions contribute significantly to the well-being of residents in long-term care, so it is important that AI plays a supporting, not a replacing, role.”

Could AI health surveillance technology influence trust between a patient and their health care provider?

“Absolutely! In my book, I discuss medication tracking, especially for mental health and pain medications, which could be a double-edged sword. On one hand, people who use tracking devices may have data as evidence to prove that they are taking their medication as instructed, which can build trust between them and their care provider. On the other hand, if medication use needs to be tracked for the patient to be believed, where is the trust in the patient? Do we even need patient reports if we could simply see the numbers? Health care is far more than data and numbers, and trust with one’s health care provider is essential to a good therapeutic relationship.”

You have a chapter on home-health monitoring in your book. What ethical considerations are there when implementing home-health monitoring technologies?

“Home-health monitoring can provide convenience, especially for those with mobility concerns or who live in rural areas. There is also the additional benefit of allowing more longitudinal health information tracking, which could be useful for preventive care. If a device alerts a person that they may be unsteady on their feet and need to hold on, it could help prevent falls. However, despite the potential benefits of collecting continuous information about people’s health progression, we need to consider that one’s home is their private space. If you have sensors and devices all over your body and your house, does that leave a moral space between what is your private, intimate place and what is health care observation? That line can easily get blurred when monitoring technologies medicalize one’s home. For people with cognitive decline, there are also ethical questions of how they can withdraw consent to being continuously watched and analyzed. If monitoring practices become more pervasive and normalized, people may no longer have the option to not be watched, in the name of protecting their safety.”

The popularity of accessible AI tools, such as ChatGPT, which receives 10 million queries daily, has brought AI technology into the home. How do you think this could impact health care?

“One can assume that as people become more comfortable using AI tools, they start trusting the technology. Chatbots that use large language models are trained on the data that have been uploaded to them. One of my key concerns about using AI chatbots to answer health questions is how the algorithms learn new information on their own. When information is uploaded to chatbots for medical purposes, it needs to remain accurate, up to date, and as unbiased as possible. The developers and technologists creating AI health tools need to ensure that the AI is indeed trustworthy and safe. Chatbots can predict the most apt response based on huge amounts of data without understanding what is factual or not, and there have been reports of chatbots ‘hallucinating,’ making up court cases and other assertions that never happened. Nonetheless, some hope that if trained and deployed properly, chatbots may help raise the bar in health care delivery. A recently published JAMA article compared responses from ChatGPT and physicians to patient questions on a social media platform. The researchers found that not only did ChatGPT give higher-quality answers, its responses were rated as empathetic nearly 10 times more frequently than the physicians’ responses. Certainly, physicians responding on social media forums do not have a therapeutic relationship with the questioners, so they may not be trying to connect with or support them. Nonetheless, perhaps this study can motivate more discussion of professional education on empathetic communication.”

Are there current government regulations in place guiding the use of AI technology?

“Several governments have been developing frameworks to guide the use of AI technologies, with the European Union closest to enacting regulations through the EU AI Act. In Canada, there is no regulatory framework specific to AI yet, but the proposed Artificial Intelligence and Data Act (AIDA) would set the foundation for the responsible design, development, and deployment of AI systems. There need to be regulatory processes or other governance structures in place to make sure there is evidence that an AI health surveillance technology works effectively across a variety of populations; this may include an audit trail showing that it works equally well among different populations. Then, after implementation, especially for adaptive algorithms that gradually change their predictive outputs based on feedback or new data, there should be additional checks and balances to ensure the technology continues to be effective, because it is working with vast amounts of private health data and can affect people’s well-being.”
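As a rough illustration of the kind of audit trail Dr. Ho describes, the sketch below compares a monitoring algorithm’s sensitivity (true-positive alert rate) across population subgroups and flags any group that lags the best-served one. The choice of metric and the tolerance are assumptions made for the example.

```python
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, alerted, actually_declined) tuples.
    Returns each subgroup's sensitivity (true-positive rate)."""
    tp, fn = defaultdict(int), defaultdict(int)
    for group, alerted, declined in records:
        if declined:
            if alerted:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}

def audit_gaps(records, tolerance=0.05):
    """Flag subgroups whose sensitivity trails the best-served group by more
    than `tolerance`: evidence the tool does not work equally well for all."""
    rates = sensitivity_by_group(records)
    if not rates:
        return {}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best - r > tolerance}
```

Logging such a per-group report at each model update, particularly for adaptive algorithms whose outputs drift over time, is one plausible way to document that performance stays equitable after deployment.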

Broadly implementing new health care interventions is often met with hesitancy, especially from underserved populations. How could this affect AI health surveillance technology?

“AI technology is continuously learning from the data it collects. In health care, the more diversity there is in the data, the more the technology can learn and apply to different populations. But an algorithm cannot learn from information it doesn’t have. If it is trained on data from a homogeneous population, it can make more errors for excluded populations. It is easy to imagine that populations who have been underserved, who have been marginalized socially, who don’t trust the system, or who have less health or technological literacy due to social disadvantages would not want to be involved. This reluctance to participate in AI health research and development means their data are missing, and they will, unfortunately, remain poorly served because the health system doesn’t have enough data on them. The vicious cycle could continue to underserve these populations indefinitely. This is one of the reasons why health systems cannot rely solely on AI health surveillance technology to improve health outcomes for all. AI data cannot tell us the whole story, because we would be missing a lot about what it means to live a healthy life, to be sick, and to experience illness and suffering in an unequal world.”

What needs to be considered when integrating AI technology into health monitoring?

“Currently, AI health monitoring technologies are still in developmental stages and have yet to show clear clinical value. We are assuming that these technologies can do a lot of work for us in the future, but we need to consider how best to ensure proper research and governance in developing, testing, and integrating them moving forward. Not every health problem is best solved with AI. Health systems really need to think about which types of AI technologies are most helpful for which problems, so that they can invest accordingly, integrate these technologies to give health care providers useful information to guide patients’ care, and help patients feel more empowered about their own health. We need to avoid the danger of relying on, or presuming too much from, AI to inform health care decisions. Humans in an unequal world are more complex than algorithms, so it is important that we do not rely entirely on numbers; rather, we need to keep considering the social environment that shapes people’s ability to live a healthy life.”

This story is excerpted from the original written by Allison Muller at CHÉOS.