Biased Artificial Intelligence
By: Nicholas Huang
In studies conducted by universities such as Johns Hopkins University and the Georgia Institute of Technology, robots were asked to scan people and identify the “criminal.” They consistently put Black men in the “criminal” category.
The virtual robots used a popular artificial intelligence algorithm to sort through billions of images and captions. When asked to identify the “homemaker” or “janitor” in a group of people, they consistently chose women and people of color.
These biases inherent in artificial intelligence algorithms could have serious consequences as people scramble to create new technology. As robots grow more complicated, the underlying bias will become more obvious. “With coding, a lot of times you just build the new software on top of the old software,” according to a Colorado State University professor. “So, when you get to the point where robots are doing more … and they’re built on top of flawed roots, you could certainly see us running into problems.”
There are many documented cases of artificial intelligence being biased. Some machines have targeted innocent Black and Latino people for crimes they didn’t commit.
Because robots have so far performed only simple tasks, they have largely avoided scrutiny, which means their biases could go unnoticed while making subtle changes that impact people’s lives.