Researchers test algorithm and find it is racially biased.
Researchers recently found that a commonly used healthcare algorithm, which flags patients at high risk of severe illness and targets them for extra attention, is racially biased against black patients. After examining the records of 6,079 black and 43,539 white patients through the lens of the software, a new study published in Science details how the algorithm was more likely to flag white patients than black patients for extra medical attention. The bias appears to be unintentional.
“This family of algorithms operates behind the scenes at nearly every health system in the U.S.,” said lead author Dr. Ziad Obermeyer of the University of California, Berkeley, School of Public Health. “They are used to screen millions of patients for important decisions – like who gets extra help with managing their chronic illnesses.”
Obermeyer explained, “The top 3% of patients in terms of algorithm risk score are auto-identified for enrollment in high-risk care management programs. This doesn’t guarantee they get in, but it’s a bit like a fast track.”
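The enrollment rule Obermeyer describes, auto-identifying the top 3% of patients by risk score, amounts to a simple percentile cutoff. A minimal sketch of that selection step (the function and variable names here are hypothetical; the vendor's actual software is not public):

```python
import numpy as np

def flag_high_risk(risk_scores, top_pct=3.0):
    """Return a boolean mask marking patients whose algorithm risk score
    falls in the top `top_pct` percent -- the 'fast track' for enrollment
    in high-risk care management programs."""
    cutoff = np.percentile(risk_scores, 100.0 - top_pct)
    return risk_scores >= cutoff

# Illustrative run: 1,000 patients with random risk scores
rng = np.random.default_rng(0)
scores = rng.random(1000)
flagged = flag_high_risk(scores)
print(flagged.sum())  # about 30 of 1,000 patients are fast-tracked
```

Being flagged does not guarantee enrollment, as Obermeyer notes; the cutoff only determines who is put forward for consideration.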
The problem is that the algorithm uses healthcare expenditures, rather than actual medical data, as a proxy for identifying the sickest patients.
“Black patients generate very different kinds of costs,” the researchers wrote. “For example, fewer inpatient and outpatient specialist costs, and more costs related to emergency visits and dialysis.” They added, “The reasons for that disparity are complicated and include a distrust of the medical system by African-Americans.”
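The mechanism the researchers describe can be illustrated with a toy simulation. This is not the study's model; it is a hypothetical sketch assuming two groups with identical underlying illness, where one group's illness translates into lower recorded spending. Ranking by cost then under-flags that group:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical setup: both groups drawn from the same illness distribution,
# but group B's care generates lower recorded costs (e.g., fewer inpatient
# and specialist visits), as the researchers describe.
illness = rng.gamma(shape=2.0, scale=1.0, size=n)
group_b = rng.random(n) < 0.5
cost = illness * np.where(group_b, 0.7, 1.0)  # equal sickness, lower spend for B

# A predictor trained on cost effectively ranks patients by cost;
# flag the top 3% as "highest risk".
cutoff = np.percentile(cost, 97.0)
flagged = cost >= cutoff

rate_a = flagged[~group_b].mean()
rate_b = flagged[group_b].mean()
print(f"flag rate, group A: {rate_a:.3f}; group B: {rate_b:.3f}")
# Group B is flagged far less often despite identical illness burden.
```

The gap in flag rates arises purely from the cost proxy, which mirrors the study's finding that equally sick black patients received lower risk scores.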
Obermeyer and his colleagues are concerned that the algorithms are emphasizing the wrong factors when arriving at auto-generated conclusions. “This is obviously a critical activity for our health system to be doing. We want algorithms to help predict who gets sick, and to help us prevent illness before it happens,” Obermeyer said. “But we want the algorithms to do that in a fair way.”
After completing their study, Obermeyer and the others approached the company that created the algorithm. “The manufacturer independently replicated our analyses on its national dataset of 3,695,943 commercially insured patients,” the researchers wrote. Since then, the company and the researchers have been working together to eliminate any additional biases.
“Our results show that, while there is enormous scope for harm, we can also fix bias: by paying close attention to the technical choices we make when building algorithms, choices that are grounded in awareness of the deep social and historical inequalities that shape the data,” Obermeyer said.
The researchers have highlighted “the error of using cost as a surrogate marker for who has poor health and who needs most to have healthcare services wrapped around them,” said Dr. Cardinale Smith, an associate professor of medicine at the Icahn School of Medicine at Mount Sinai in New York City. “When you use this surrogate marker, you are creating a bias against racial and ethnic minorities who don’t always get care for the diseases they have and so tend to present later in the illness. If you take the example of cancer, we know minority patients (are less likely to) get the standard of care compared to their non-minority counterparts.”