As AI-powered tools make their way into healthcare, the latest research from UC Santa Cruz Department of Politics doctoral candidate Lucia Vitale takes stock of the current landscape of promise and anxieties.
AI advocates envision the technology helping to manage healthcare supply chains, monitor disease outbreaks, make diagnoses, interpret medical images, and even reduce equity gaps in access to care by offsetting shortages of healthcare workers. But others are sounding the alarm on issues such as privacy rights, racial and gender biases in models, a lack of transparency in AI decision-making processes that could lead to errors in patient care, and even the possibility of insurance companies using AI to discriminate against people in poor health.
The types of impacts these tools will ultimately have will depend on how they are developed and implemented. In an article for the journal Social Science & Medicine, Vitale and her co-author, University of British Columbia PhD candidate Leah Shipton, conducted an extensive literature review of the current trajectory of AI in healthcare. They argue that AI is positioned to become the latest in a long line of technological advances that ultimately have limited impact because they engage in a “politics of avoidance” that diverts attention from more fundamental structural problems in global public health, or even worsens them.
For example, like many technological interventions of the past, most AI being developed for health focuses on treating diseases, ignoring the underlying determinants of health. Vitale and Shipton fear that the hype about unproven AI tools could distract from the urgent need to implement low-tech but evidence-based holistic interventions, such as community health workers and harm reduction programs.
“We’ve seen this pattern before,” Vitale said. “We continue to invest in these technological silver bullets that don’t really change public health because they don’t address the deep-seated political and social determinants of health, which can range from things like health policy priorities to access to healthy foods and a safe place to live.”
AI is also likely to continue or exacerbate patterns of harm and exploitation that have historically been common in the biopharmaceutical industry. One example discussed in the paper is that ownership of and benefits from AI are currently concentrated in high-income countries, while low- to middle-income countries with weak regulations may be targeted for data mining or for experimentation with the deployment of new, potentially risky technologies.
The paper also predicts that lax regulatory approaches to AI will continue to prioritize intellectual property rights and industry incentives over equitable and affordable public access to new treatments and tools. And since corporate profit motives will continue to drive product development, AI companies are also likely to follow the health technology sector’s long-standing trend of overlooking the needs of the world’s poorest people when deciding which topics to prioritize for research and development investment.
However, Vitale and Shipton identified a bright spot. AI could potentially break the mold and create a deeper impact by focusing on improving the healthcare system itself. AI could be used to allocate resources more efficiently between hospitals and for more effective patient triage. Diagnostic tools could improve efficiency and expand the capabilities of general practitioners in small rural hospitals without specialists. AI could even provide some basic but essential health services to fill job and skill gaps, such as providing prenatal checkups in areas with growing maternity care deserts.
All of these applications could potentially result in more equitable access to care. But that outcome is far from guaranteed. Depending on how and where these technologies are implemented, they could either fill gaps in care where there is a genuine shortage of healthcare workers or lead to unemployment and precarious jobs for existing healthcare workers. And unless the underlying causes of health worker shortages are addressed (including burnout and the “brain drain” to high-income countries), AI tools could end up providing diagnoses or outbreak detection that are ultimately not useful because communities still lack the capacity to respond.
To maximize the benefits and minimize the harms, Vitale and Shipton argue that regulation must be implemented before AI expands further in the healthcare sector. The right safeguards could help steer AI away from the harmful patterns of the past and instead chart a new path that ensures future projects align with the public interest.
“With AI, we have the opportunity to correct the way we govern new technologies,” Shipton said. “But we need a clear agenda and framework for the ethical governance of AI health technologies through the World Health Organization, the major public-private partnerships that fund and deliver health interventions, and countries like the United States, India, and China that host technology companies. Sustained advocacy from civil society will be necessary to implement it.”