According to a recent study, AI-powered deep learning models can determine a
person's race from their X-rays alone, something a human doctor looking at
the same images cannot do.
The results raise unsettling questions about the role of AI in medical
diagnosis, assessment, and treatment: could software analyzing images like
these carry an unintended racial bias?
An international team of health researchers from the US, Canada, and Taiwan
first trained an AI system on hundreds of thousands of existing X-ray images
labeled with the patient's self-reported race. They then tested the system on
X-ray images it had never seen before (and had no additional information
about).
Even when the scans came from patients of the same age and sex, the AI
predicted the patient's self-reported racial identity with startling
accuracy, reaching around 90 percent on some groups of images.
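To make that setup concrete, here is a minimal, hypothetical sketch in Python
(using PyTorch and torchvision) of the kind of pipeline described: fine-tune a
standard off-the-shelf image model on X-rays labeled with self-reported race,
then measure accuracy on an external set of images the model has never seen.
The folder layout, label scheme, architecture, and training schedule below are
illustrative assumptions, not the study's actual code.

```python
# Hypothetical sketch: train a standard classifier on race-labeled X-rays,
# then evaluate on held-out external images. Paths and settings are invented
# for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed folder layout: xrays/train/<race_label>/*.png and
# xrays/external/<race_label>/*.png
train_set = datasets.ImageFolder("xrays/train", transform=preprocess)
external_set = datasets.ImageFolder("xrays/external", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
external_loader = DataLoader(external_set, batch_size=32)

# A standard off-the-shelf backbone, re-headed for the race labels in the data.
model = models.resnet34(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # short schedule, purely for illustration
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# External validation: accuracy on images the model has never seen.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in external_loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"External accuracy: {correct / total:.2%}")
```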
In their recently released study, the researchers state that they "aimed to
conduct a thorough evaluation of the ability of AI to recognize a patient's
racial identity from medical images."
"We demonstrate that across multiple imaging modalities, standard AI deep
learning models can be trained to predict race from medical images with high
performance, which was sustained under external validation
conditions."
The study confirms the findings of an earlier investigation, which revealed
that Black patients were more likely to have signs of illness missed when AI
systems analyzed their X-ray images.
To prevent that from happening again, scientists first need to understand why
it happens at all.
By design, AI mimics human pattern-finding to sift through data quickly, which
means it can also unintentionally absorb the same prejudices. Worse, the
complexity of these models makes it hard to untangle the biases we have woven
into them.
Researchers still don't know why the AI system is so good at identifying race
from images that contain no explicit markers of it. The models surprised the
researchers by correctly identifying the race recorded in the file even when
given very limited information, for example when cues to bone density were
removed or only a small portion of the body was shown.
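A hypothetical sketch of that kind of degradation test, reusing the setup
above: the same trained model is re-evaluated on copies of each image that
have been cropped to a small patch, blurred to suppress fine detail such as
bone texture, or heavily noised. The specific filters and parameters here are
illustrative assumptions, not the study's exact procedure.

```python
# Hypothetical sketch: measure how well an already-trained model predicts
# race when every input image is degraded first.
import torch
from torchvision import transforms

degradations = {
    "cropped": transforms.Compose([
        transforms.CenterCrop(64),   # keep only a small patch of the body
        transforms.Resize((224, 224)),
    ]),
    "blurred": transforms.GaussianBlur(kernel_size=21, sigma=8.0),
    "noised": transforms.Lambda(
        lambda x: (x + 0.5 * torch.randn_like(x)).clamp(0, 1)
    ),
}

def accuracy_under(degrade, model, loader):
    """Accuracy of a trained model when every input is degraded first."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(degrade(images)).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# Example usage with the model and external_loader from the earlier sketch:
# for name, degrade in degradations.items():
#     print(name, accuracy_under(degrade, model, external_loader))
```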
It's possible that the system is detecting melanin, the pigment that gives
skin its color, in ways that science is yet to discover.
"Our finding that AI can accurately predict self-reported race, even from
corrupted, cropped, and noised medical images, often when clinical experts
cannot, creates an enormous risk for all model deployments in medical
imaging," the
researchers write.
The study adds to a growing body of research showing that AI systems often
reproduce human prejudices and biases, including racism and sexism. Skewed
training data produces skewed results, and those results are far less useful.
That risk has to be weighed against the enormous potential of artificial
intelligence to process data far faster than humans can, from disease
detection to climate change modeling.
Many questions raised by the study remain unanswered, but for now it's
important to be aware that racial bias can surface in artificial intelligence
systems, especially if we're going to hand them more responsibility in the
future.
Leo Anthony Celi, a research scientist and physician at the Massachusetts
Institute of Technology, told the Boston Globe that "we need to take a break."
"Until we are certain that the algorithms are not making discriminatory or
sexist decisions, we cannot rush their introduction into hospitals and
clinics."
The research has been published in The Lancet Digital Health.