Racist and sexist robot, web training data to blame

(ANSA) – ROME, JUN 24 – Even robots can become racist and sexist: this is demonstrated by an experiment conducted in the United States, in which a robot learned to act on common stereotypes, for example by associating black people with crime and women with housework. The fault lies with its 'brain', a widely used artificial intelligence system trained on data taken from the web. The study, conducted by Johns Hopkins University with the Georgia Institute of Technology and the University of Washington, was presented at the Association for Computing Machinery (ACM) FAccT 2022 conference in South Korea.

"The robot learned dangerous stereotypes through imperfect neural network models," says study first author Andrew Hundt. "We risk creating a generation of racist and sexist robots," warns the researcher, stressing the need to address the issue as soon as possible.

The problem is that the developers of artificial intelligence systems for recognizing people and objects usually train their neural networks on data sets freely available on the Internet. Much of that content, however, is inaccurate and biased, so any algorithm built on it is likely to be flawed. The issue has been raised several times by tests and experiments showing the poor reliability of certain artificial intelligence systems, used for example in facial recognition. Until now, however, no one had tried to evaluate the consequences of these flawed algorithms once they are used to control autonomous robots operating in the real world without human supervision. (ANSA)

Source: ANSA
