Understanding bias in AI, and using a platform that addresses it proactively, are vital to your patients’ care.
“Technology Can’t Fix Algorithmic Injustice.” “Ethical Considerations In The Use Of AI Mortality Predictions In The Care Of People With Serious Illness.” These are just two of the articles published in recent months about racial bias in artificial intelligence. Even if you’ve read each one in an effort to learn about the topic, it can be difficult to know how the issue applies to your patient care, and what you can do about it.
Racial bias is one of artificial intelligence’s skeletons in the closet, something all AI developers must contend with, but that few companies want to acknowledge exists. The topic is equally intimidating for hospice and palliative care providers to address because it can involve in-depth technological discussions that even industry insiders can have difficulty explaining. Nonetheless, there are crucial questions to ask any company offering an AI-informed solution that will impact your patient care.
“Providers need to ask their AI companies about bias,” said Dianne Gray, Chief Innovation & Patient Advocacy Officer at Acclivity Health. “‘What is it? How does it relate to me? Does your program contain bias?’ Hospice and palliative care teams want to provide whole-person care, and that means care that is based on a person’s physical, emotional, spiritual, and psychosocial needs. That means we have to recognize issues of race, inequity, and bias, so we need to ask companies providing AI solutions about how their platforms account for these factors.”
These are the questions you need to ask your AI supplier about racial bias.
1. What causes racial bias in AI?
Bias is defined as “prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.” When you think of AI, you probably imagine a fast-moving algorithm, a system that is neutral on its own, and that much is true. Bias enters AI, whether related to race, age, gender, or other factors, through health care data. That data reflects patients’ real-world experiences with health care, and those experiences are themselves shaped by bias.
“AI itself has no bias; it’s a computer program,” said Duane Feger, Director of Health Economics at Acclivity Health. “But the problem of bias exists in AI models and nobody can avoid it — even IBM has canceled some of their AI projects because of biases they later discovered had worked their way into the model. The bias comes into play when you enter biased data into an AI model. Any biases in the data you’re using are manifested in the AI model itself. All health care data has underlying biases due to the fact that we don’t have equal access to care for many groups in our society. If you’re not receiving care, your data isn’t included in the databases, and that has a domino effect.”
“Because care is given in an inequitable way, the data from that care has an inequity built into it,” explained Robin Stawasz, Program Development Executive at Acclivity Health and committee member for the National Hospice and Palliative Care Organization (NHPCO). “That data brings forward any inherent inequity in how the care was provided. Because we acknowledge this, we can make efforts on the backend to account for some of that disparity.”
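To make the mechanism concrete, here is a minimal, purely illustrative sketch using synthetic data and scikit-learn (not Acclivity’s actual model or data): a classifier trained on records dominated by one group tends to be less accurate for a group that is underrepresented in the training data.

```python
# Illustrative only: synthetic data, not real patient records or Acclivity's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic patients: two clinical features and an outcome whose
    relationship to the features differs slightly between groups."""
    X = rng.normal(size=(n, 2))
    logits = 1.5 * X[:, 0] + shift * X[:, 1]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.2)
Xb, yb = make_group(250, shift=-1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on fresh samples, accuracy is typically noticeably lower for group B,
# because the model has mostly learned group A's patterns.
for name, shift in [("Group A (well represented)", 0.2), ("Group B (underrepresented)", -1.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, round(model.score(X_test, y_test), 3))
```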
2. What are the risks of allowing racial bias to exist unchecked in an AI platform?
Now that you know how bias enters an AI system, what does it actually cause? When AI has only limited data sets to work with, it can produce models or make predictions about care that are extremely accurate for the population represented in that data. But for those who fall outside that population — most often those with unequal access to care — the models may predict their care needs with less accuracy.
“A model may not be completely wrong, but it may have a larger margin of error (confidence interval) when you’re looking at groups that are not well represented in the data,” said Feger.
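Feger’s point about the margin of error can be shown with simple arithmetic. In the hedged sketch below (hypothetical validation counts, not results from any real model), two groups show the same observed error rate, but the 95% confidence interval around that rate is far wider for the group with fewer records.

```python
# Hypothetical numbers for illustration only; not results from any real model.
import math

def error_rate_ci(errors, n, z=1.96):
    """Normal-approximation 95% confidence interval for an observed error rate."""
    p = errors / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return round(p - half_width, 3), round(p + half_width, 3)

# Both groups show a 10% error rate, but the smaller group's estimate is far less certain.
print(error_rate_ci(errors=500, n=5000))  # well-represented group: roughly (0.092, 0.108)
print(error_rate_ci(errors=20, n=200))    # underrepresented group: roughly (0.058, 0.142)
```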
That’s why Acclivity Health strives to address any biases exposed by its platform and to ensure the physicians who use it know exactly how to apply its predictions to their patients.
3. How does Acclivity Health address racial bias within its own platform?
Feger explained that the main challenge of correcting racial bias within an AI model is that you don’t know exactly how much bias is present in the underlying data. If more accurate data sets existed for comparison, you could simply feed those into your algorithm and correct the issue. Because there is no way to correct the data itself, Acclivity Health opts for full transparency so providers know when a data set is limited. This allows them to take any possible biases into consideration when treating patients.
“Because we do not have equal access to care in our society, all data includes biases, and any model built on that data will include those biases. It’s unavoidable. The key is to proactively inform health care professionals who use the models. The best thing we can do as an industry is provide disclosure,” said Feger. “At Acclivity Health, we ensure that we understand and acknowledge the fact that AI models can contain inherent biases. We address it to the best of our ability, and disclose the makeup of the training data so any of the providers or organizations we work with are fully aware of it. By doing this, we ensure they are not being deceived by a model that may be 99% accurate for one population but not for another.”
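What a disclosure of training-data makeup might look like in practice is sketched below. This is a hedged illustration only; the field names, categories, and counts are hypothetical and are not drawn from Acclivity Health’s actual data or platform.

```python
# Hypothetical disclosure sketch; groups and counts are invented for illustration.
from collections import Counter

def training_data_disclosure(records):
    """Summarize how the training records break down by group, so clinicians
    can judge how well the model's training data covers their own patients."""
    counts = Counter(r["race_ethnicity"] for r in records)
    total = sum(counts.values())
    return {group: {"n": n, "share": round(n / total, 3)} for group, n in counts.items()}

# Made-up example records:
records = ([{"race_ethnicity": "White"}] * 800 + [{"race_ethnicity": "Black"}] * 120
           + [{"race_ethnicity": "Hispanic"}] * 60 + [{"race_ethnicity": "Other"}] * 20)
print(training_data_disclosure(records))
# {'White': {'n': 800, 'share': 0.8}, 'Black': {'n': 120, 'share': 0.12}, ...}
```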
Stawasz added that many AI models learn biases because they analyze data that is more likely to be affected by social factors, including utilization of care and cost.
“AI also offers the opportunity to ‘un-bias’ the data, if you will, and there are some pieces of our platform that are more resistant to bias. If we know a population has underreported costs, we can adjust our system to compensate for that instead of fully relying on the provider to make that assessment. Most bias comes out of utilization of care and cost, but features like our palliative performance score analyze conditions instead, which are more resistant to bias,” she said.
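In concrete terms, that kind of compensation could look something like the sketch below. It is purely illustrative, not Acclivity’s actual method; the group label and adjustment factor are made-up placeholders.

```python
# Illustrative placeholder values; not Acclivity's actual groups or adjustment factors.
UNDERREPORTING_FACTOR = {"underserved_group": 1.25}  # hypothetical: recorded costs understate use by ~20%

def adjusted_cost(recorded_cost: float, group: str) -> float:
    """Scale recorded cost upward for groups with known underreporting;
    pass all other groups through unchanged."""
    return recorded_cost * UNDERREPORTING_FACTOR.get(group, 1.0)

print(adjusted_cost(10_000.0, "well_documented_group"))  # 10000.0, no adjustment
print(adjusted_cost(10_000.0, "underserved_group"))      # 12500.0, compensates for underreporting
```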
4. How might racial bias in AI affect patient care in a hospice or palliative care setting?
For hospice and palliative care providers who use AI to inform patient care, the underlying data sets will often reflect your patient population.
“These models are extremely useful for you because they do reflect the makeup of the population moving through hospice or palliative care,” Feger said. “However, the model may be biased in favor of senior Caucasian people, which is the bulk of people moving through your care. In that respect it’s very useful. But when you receive patients who fall outside that ‘norm,’ be aware that you have to use more professional judgment.”
5. Why is it important to partner with a company that acknowledges and addresses racial bias?
Because bias creeps into all AI platforms, it’s crucial for providers to work with companies that address it proactively. Companies that don’t could be allowing that bias to affect your patients’ care.
“It’s not a flaw with Acclivity or with AI, but with the American health care system. The question is: What are you doing about it?” said Feger. “Are you being transparent and explaining to your consumers how to use these models? Is the person producing these models proactively trying to improve the data set? Our team addresses bias to the best of our ability because it’s not enough to say, ‘Oh well, this is all the data that’s available.’”
“Our expert team is unique in that it acknowledges the topic of bias and constantly reviews the data and the way we program to investigate any opportunity for bias,” added Gray. “There are vast differences in how AI platform developers handle bias, and therefore not all are created equal. We are tackling ethical issues from the beginning and constantly reviewing for more opportunities to remove bias.”
If you want to learn more about how Acclivity Health proactively addresses bias within its platform, please email us at info@8kw.3ce.myftpupload.com.