Medicine should strive to be “race-conscious without being racist,” said David Jones, MD, PhD, during a webinar on racism in healthcare artificial intelligence (AI) held Tuesday and sponsored by the National Institute for Health Care Management.
“Should doctors treat patients of different races differently because of their race and ethnicity? They clearly do,” said Jones, a professor of epidemiology at Harvard University. “Researchers have documented many regrettable cases in which disparities in medical treatment reflect structural, institutional, or interpersonal racism.”
However, “race-based medicine often has a defensible logic,” he continued. “Researchers have documented race differences in disease prevalence and therapeutic outcomes. Clinicians, in response, have factored race into diagnostic tests, risk calculators, and treatment guidelines.” But, unfortunately, “many of these race-adjusted tools direct medical attention towards white patients. This is perverse at a time when people of color suffer higher mortality rates for many diseases.”
Bias in Risk Calculators
Jones gave the example of a “stone score” used to predict risk of kidney stones in patients with flank pain. In the scoring algorithm, “non-Black race is weighted as heavily as having blood in your urine,” he said. “This systematically assigns white people to a higher risk category and directs medical attention and resources — in this case, a CT scan — towards them.”
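The mechanics of such a points-based score are easy to see in code. The sketch below is purely illustrative — the variable names and point values are hypothetical, not the published scoring system — but it shows how giving a race term the same weight as a clinical finding like hematuria systematically shifts one group into a higher risk category:

```python
def illustrative_stone_score(male: bool, pain_hours: float, non_black: bool,
                             nausea: bool, vomiting: bool, hematuria: bool) -> int:
    """Hypothetical points-based kidney stone risk score.

    All weights are made up for illustration; note that the race term
    carries as many points as blood in the urine (hematuria).
    """
    score = 0
    if male:
        score += 2
    if pain_hours < 6:
        score += 3
    elif pain_hours <= 24:
        score += 1
    if non_black:      # race weighted as heavily as hematuria below
        score += 3
    if vomiting:
        score += 2
    elif nausea:
        score += 1
    if hematuria:
        score += 3
    return score

# Two patients identical in every clinical respect, differing only in race,
# land 3 points apart — potentially on opposite sides of a CT-scan threshold.
white_patient = illustrative_stone_score(True, 3, True, False, False, True)
black_patient = illustrative_stone_score(True, 3, False, False, False, True)
```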
Another example is a tool that predicts whether a pregnant person should attempt a vaginal birth after they’ve had a prior cesarean section. “The researchers studied outcome data from over 7,000 women, and found several factors associated with a high risk of bad outcome, including weight, Black race, Hispanic ethnicity, insurance status, marital status, and tobacco use,” Jones said. “But when they put these findings into their risk calculator, they included weight, race, and ethnicity, but not the other factors, as if somehow they felt race was more important than the others.”
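Risk calculators of this kind are typically logistic regression models, so the effect of including a race coefficient can be sketched directly. The coefficients below are invented for illustration — they are not the published model — but they show how a race term mechanically lowers the predicted success probability for Black and Hispanic patients with otherwise identical clinical profiles:

```python
import math

def predicted_vbac_success(bmi: float, black: bool, hispanic: bool,
                           include_race: bool = True) -> float:
    """Hypothetical logistic model in the style of a VBAC success calculator.

    All coefficients are illustrative. With include_race=False the model
    behaves like the reformulated, race-free version of such a tool.
    """
    logit = 2.0 - 0.05 * bmi          # clinical factor (weight/BMI)
    if include_race:
        if black:
            logit -= 0.7              # race penalty lowers predicted success
        if hispanic:
            logit -= 0.5              # ethnicity penalty
    return 1.0 / (1.0 + math.exp(-logit))

# The same Black patient scores lower when race is included in the model.
with_race = predicted_vbac_success(30.0, black=True, hispanic=False)
without_race = predicted_vbac_success(30.0, black=True, hispanic=False,
                                      include_race=False)
```

Because a lower predicted success rate can steer clinicians toward a repeat cesarean, the race term translates directly into different care for patients the model otherwise treats identically.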
Jones and colleagues published a paper in August 2020 on the topic, and included examples of 13 tools from different specialties. “Several of these rested on a very thin evidence base; many of the tools rely on a dichotomous variable: Black or non-Black,” he said. “We feared that these tools, if used as directed, would exacerbate health disparities.”
Effect of NFL Controversy
Jones noted that tensions were inflamed recently with the revelation that the National Football League was using race-specific norms on psychological tests to deny concussion settlements to retired Black players. “These efforts prompted and energized congressional hearings and inquiries,” he said. “Many medical societies responded and re-examined their use of race, and committed themselves to an anti-racist agenda. Several announced plans to stop using race in their clinical algorithms,” with two of the most prominent tools — a kidney function test and the vaginal birth risk calculator — since being reformulated without race as a factor.
“But race cannot be ignored,” he added. “Health inequities are ubiquitous in medicine. We must study race and racism if we are to eradicate inequity.”
The fundamental question is, “are humans basically the same or not?” said Jones. “Medicine currently operates on the assumption that we are different, and that doctors can improve outcomes by focusing on race differences.” There certainly are some relevant differences between people of different ancestries; for example, “Tay-Sachs disease is more common in people of Ashkenazi [Eastern European Jewish] ancestry, while sickle cell trait is more common amongst people with a West African ancestry.”
“But there is a danger in applying these differences carelessly,” he continued. “Even though sickle cell trait is 25 times more prevalent in African Americans than in white Americans, most African Americans don’t have it, and it would be wrong to treat all African Americans differently because of a trait carried by a minority of them.”
In addition, the current way of categorizing people by race is also out of date, according to Jones. “Imagine three patients that a clinician might meet: one descended from enslaved Africans, another a second-generation Ethiopian immigrant, and a third, a student from a wealthy family from West Africa,” he said. “These people might have little in common in terms of their lived experiences of racism, and little shared ancestry as well. But our healthcare system would label them as one type — Black — distinct from all other humans. And this simply makes no sense.”
“I am not calling for color-blind medicine,” Jones concluded. “As long as race and racism determine access to wealth, health, and social resources, we need to study them. I support race-conscious medicine; we just need to figure out the best way for medicine to be race-conscious … If we think human differences are important and should inform medical practice, then we need to invest the resources required to map and understand those differences.”
Employing AI in Everyday Practice
Rajeev Ronanki, chief digital officer of health insurer Anthem, Inc., explained how his company — the second-largest insurer in the U.S. — uses artificial intelligence to improve patient care. Using a dataset that combines information on the patients’ social determinants of health with claims data and lab data, “today we’re able to predict roughly 4,000 unique things about a person’s future healthcare journey — everything from the likelihood of developing an opioid substance use disorder or developing a chronic condition like hypertension or diabetes in the future, to the risk of falling, the risk of skipping an appointment, and the risk of missing a critical gap in care,” he said.
“We’re integrating our insights into the electronic medical records and other points of data and workflow in a physician’s office or hospital system to get them to see the whole picture of health,” he continued. Physicians can then consider, “Does this patient that I’m seeing have access to transportation to come see me for my next appointment, or does this person have enough time in their day to spend their time exercising and sleeping well?”
Anthem has also expanded its value-based care program “where we align the incentives and the outcomes for each provider based on the number of actions that they’re taking on the insights that are being created by our algorithms,” he added.
But in using the data in this way, “there’s a critical dependency on making sure that we’re accounting for things like bias … to make sure that the insights that we’re surfacing are actually representative of the population, and that we’re not missing anything because we’re simply using machine learning techniques to learn from perhaps what’s an embedded bias in the system already,” said Ronanki. Anthem has therefore created an Office of Responsible Use of AI and Ethics, “and this group continuously looks at data and tests for bias, as well as using several other techniques” to ensure the algorithm is working well.
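One common way such a group tests for bias — a sketch, not Anthem’s actual method — is to compare a model’s positive-prediction rates across demographic groups, a quantity often called the demographic parity gap. A large gap means the model flags one group far more (or less) often than another, which may reflect bias embedded in the training data:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 model outputs.
    groups: iterable of group labels, aligned with predictions.
    A gap of 0.0 means every group is flagged at the same rate.
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Group 'a' is flagged at 50%, group 'b' at 25%: the gap is 0.25.
gap = demographic_parity_gap([1, 1, 0, 0, 1, 0, 0, 0],
                             ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'])
```

An audit of this kind only detects unequal output rates; it cannot by itself say whether the disparity is clinically justified, which is why Ronanki describes it as one of several techniques used together.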