AI-Trained Robots Became Racist and Sexist
By: Andrew Cheng
Recent developments in AI and robotics have produced an algorithm that allows robots to identify people's faces and objects that match a specific input word. This advance could help overcome one of the biggest obstacles in AI development: enabling robots to recognize different objects in the real world and carry out their specific functions without any commands from a person.
The algorithm draws on an extensive database of images that is transferred to the AI. The AI then uses this database to learn the relationship between the word a user inputs and the pictures it has seen. Finally, it outputs the images it considers related to that word.
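The article does not name the underlying model, but word-to-image matching of this kind is commonly done by comparing feature vectors. A minimal toy sketch, assuming each image in the database carries a hypothetical feature vector and the query word is mapped into the same vector space, with cosine similarity deciding which images count as "related":

```python
import math

def cosine(a, b):
    # Cosine similarity: how closely two feature vectors point
    # in the same direction, on a scale from -1 to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def related_images(query_vec, database, top_k=2):
    # Rank every image by similarity to the query vector
    # and return the names of the top_k closest matches.
    ranked = sorted(database.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical image embeddings, invented for illustration only.
database = {
    "img_dog.jpg": [0.9, 0.1, 0.0],
    "img_cat.jpg": [0.8, 0.2, 0.1],
    "img_car.jpg": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend this vector encodes the word "pet"
print(related_images(query, database))  # → ['img_dog.jpg', 'img_cat.jpg']
```

Because the ranking depends entirely on what associations the training database encodes, any skew in that data flows straight through to the output, which is the failure mode described next.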
However, some of the robots' judgments have caused problems. As mentioned above, the AI outputs a result after the user inputs a keyword. But when given keywords such as "homemaker" and "janitor," the robots disproportionately produced images of women and people of color.
"So, when you get to the point where robots are doing more … and they're built on top of flawed roots, you could certainly see us running into problems," said Zac Stewart Rogers, a supply chain management professor at Colorado State University. Vicky Zeng of Johns Hopkins University added that a kid asking an at-home robot built on this algorithm to fetch a "beautiful" doll could see it return with a white doll.
Experts worry that if robots carry these judgments into the future, they will cause serious problems. Some argue that robots are only machines and are not created with bias. However, bias can enter through the database itself when the robot receives potentially racist and sexist associations from human-generated data. Future studies will tell whether scientists can filter the algorithms' databases to fix these issues.
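One way researchers could check a database or a system's outputs for the skew described above is a simple audit: for a given keyword, count the demographic tags on the returned images and flag the keyword when one group dominates. A hypothetical sketch (the tag names and threshold are invented for illustration):

```python
from collections import Counter

def is_skewed(tags, threshold=0.7):
    # Flag the result set if any single demographic tag makes up
    # more than `threshold` of the returned images.
    counts = Counter(tags)
    total = len(tags)
    return any(count / total > threshold for count in counts.values())

# Pretend these are the tags on images a system returned for "homemaker".
results = ["woman", "woman", "woman", "woman", "man"]
print(is_skewed(results))  # → True (4/5 of results share one tag)

balanced = ["woman", "man", "woman", "man"]
print(is_skewed(balanced))  # → False
```

An audit like this only detects skew; deciding how to rebalance or filter the underlying data is the harder, open problem the article points to.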