Biased Self-Driving Cars
We already knew that some facial recognition systems struggle to accurately identify people with darker skin.
Now we know that many of the artificially intelligent systems designed to help autonomous cars navigate roads share the same flaw, and the result could disproportionately endanger pedestrians with darker skin. It's a troubling sign of how AI can inadvertently reproduce prejudices from the wider world.
In a new paper published on the preprint server arXiv, researchers from the Georgia Institute of Technology detail their investigation of eight AI models used in state-of-the-art object detection systems. These are the systems that allow autonomous vehicles to recognize road signs, pedestrians, and other objects.
They tested these models using images of pedestrians divided into two categories based on their score on the Fitzpatrick scale, which is commonly used to classify human skin color.
According to the researchers’ paper, the models exhibited “uniformly poorer performance” when confronted with pedestrians with the three darkest shades on the scale.
On average, the models’ accuracy decreased by 5 percent when examining the group containing images of pedestrians with darker skin tones, even when the researchers accounted for variables such as whether the photo was taken during the day or at night.
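The comparison the researchers describe boils down to measuring detection performance separately for each skin-tone group and looking at the gap. Here is a minimal sketch of that idea in Python; the records, groupings, and the simple detection-rate metric are made up for illustration and are not the paper's actual evaluation pipeline, which uses standard object-detection metrics over real datasets.

```python
from statistics import mean

# Hypothetical evaluation records: (fitzpatrick_score, detected),
# where detected is True if the model found the pedestrian in the image.
results = [
    (1, True), (2, True), (3, True), (2, False), (1, True),
    (4, False), (5, True), (6, False), (4, True), (5, False),
]

def detection_rate(records, group):
    """Fraction of pedestrians detected among the given Fitzpatrick scores."""
    hits = [detected for score, detected in records if score in group]
    return mean(hits)

# Lighter tones (Fitzpatrick 1-3) vs. darker tones (4-6), as in the study.
lighter = detection_rate(results, group={1, 2, 3})
darker = detection_rate(results, group={4, 5, 6})
gap = lighter - darker
print(f"lighter: {lighter:.2f}, darker: {darker:.2f}, gap: {gap:.2f}")
```

With these toy numbers the lighter group is detected 80 percent of the time and the darker group 40 percent, a gap far larger than the roughly 5 percent difference the paper reports; the point is only the shape of the comparison.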
The Georgia Tech team's research suggests we could be headed toward a future in which roads full of autonomous cars are less safe for people with dark skin tones than for lighter-skinned pedestrians.
Thankfully, the study also points to a way to avoid a future of biased self-driving cars: include more images of dark-skinned pedestrians in the data sets these systems train on, and place more weight on accurately detecting those images.
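The "place more weight" part of that fix can be sketched with a common rebalancing technique: giving each training example a loss weight inversely proportional to how common its group is in the data. The group labels and counts below are invented for illustration, and the paper does not prescribe this exact scheme; it is just one standard way to make an underrepresented group count more during training.

```python
from collections import Counter

# Hypothetical training labels, one per pedestrian image, with
# darker skin tones underrepresented in the data set.
labels = ["light"] * 8 + ["dark"] * 2

def inverse_frequency_weights(labels):
    """Weight each example inversely to its group's share of the data,
    so examples from rarer groups contribute more to the training loss."""
    counts = Counter(labels)
    total = len(labels)
    return [total / (len(counts) * counts[lab]) for lab in labels]

weights = inverse_frequency_weights(labels)
# Each "light" example gets 10 / (2 * 8) = 0.625;
# each "dark" example gets 10 / (2 * 2) = 2.5.
```

These per-example weights would then multiply each image's contribution to the loss in whatever detector is being trained, nudging the model to stop treating the minority group's errors as cheap.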