Facebook says it has disabled an artificial intelligence feature after it labeled a video of black people in a racist way.
The video, published by The Daily Mail last June, shows several black men in confrontations with white police officers; once playback finished, Facebook displayed an automatic prompt asking the user if they "want to continue watching primate videos."
The scandal comes a year after worldwide protests organized by movements such as Black Lives Matter, which gained traction after the murder of George Floyd by a white police officer. Yet Facebook did not react to the problem until last weekend, when The New York Times published the company's public apology for what it called "an unacceptable error."
Facebook has explained that the message was not written by any employee; it was generated by an artificial intelligence system that analyzed the video and concluded it showed primates. Consequently, Facebook has disabled the system, which it acknowledges is "not perfect," pending an investigation to ensure the problem does not happen again.
The scandal exposes not only the dependence of sites like Facebook on automated systems, but also the problem of racial bias that runs through such systems across the industry. Machine learning, the process by which an AI "learns" to do things on its own, relies on analyzing large amounts of data; as a result, a system is only as good as the data it is given.
Activists have repeatedly denounced the technology industry for failing to include data on minorities when training its systems, and Facebook's latest failure is one of the consequences. If a person-recognition system is trained only on images of white people, it will recognize only them as "person" and may label everyone else as "animal."
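To make that failure mode concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption made for the example: the data is synthetic, the single numeric feature is a hypothetical stand-in for one visual attribute of an image, and the scikit-learn classifier has nothing to do with Facebook's actual system. It shows only the general principle that a model trained on a narrow slice of people mislabels everyone outside that slice.

```python
# Illustrative sketch of dataset bias: all data is synthetic and the
# one-dimensional "feature" is a hypothetical stand-in, not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Biased training set: every "person" example comes from one narrow
# range of the feature; the "animal" examples happen to sit lower.
person_train = rng.normal(loc=0.8, scale=0.05, size=(200, 1))
animal_train = rng.normal(loc=0.3, scale=0.10, size=(200, 1))

X = np.vstack([person_train, animal_train])
y = np.array([1] * 200 + [0] * 200)  # 1 = "person", 0 = "animal"

clf = LogisticRegression().fit(X, y)

# People whose feature values fall outside the training distribution
# are pushed onto the wrong side of the learned boundary.
unseen_people = np.array([[0.35], [0.40], [0.45]])
print(clf.predict(unseen_people))  # expected: [0 0 0], i.e. "animal"
```

In a real vision model the same dynamic plays out across millions of images: groups underrepresented in the training data end up mapped to whatever nearby category the model does know.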
The issue goes beyond offending minorities. Machine learning is already used in systems where a misidentification can change a person's life, such as the facial recognition tools adopted by a growing number of police forces.