Artificial Intelligence

Facebook confirmed that it uses artificial intelligence to identify and remove posts that incite hate and violence


Facebook confirmed that it uses artificial intelligence to identify and remove posts containing hate speech and violence, but the technology does not work well, according to internal documents reviewed by the Wall Street Journal.

Senior Facebook engineers say the company's automated systems removed posts that generated only 2% of the hate speech viewed on the platform that violated its rules, the Journal reported Sunday.

Another group of Facebook employees came to a similar conclusion, estimating that Facebook's AI removed posts that generated only 3% to 5% of the hate speech viewed on the platform, and 0.6% of the content that violated Facebook's rules on violence.

Sunday's report in the Journal was the latest installment of its "Facebook Files" series, which found the company turning a blind eye to its impact on everything from the mental health of girls using Instagram to misinformation, human trafficking and gang violence on the site. The company has called the reports "mischaracterizations."

Facebook CEO Mark Zuckerberg has said he believed Facebook's artificial intelligence would be able to remove "the vast majority of problematic content" by 2020, according to the Journal. Facebook maintains that most of the hate speech and violent content on the platform is removed by its "super-efficient" AI before users ever see it.

A Facebook transparency report from February of this year put that detection rate above 97%. That figure, however, measures the share of removed hate speech that Facebook's AI flagged before users reported it, not the share of all hate speech on the platform that the systems catch.

The latest findings in the Journal also come after former Facebook employee and whistleblower Frances Haugen testified before Congress last week about how the social media platform relies too heavily on artificial intelligence and algorithms.

Because Facebook uses algorithms to decide what content to show its users, the posts that draw the most engagement, and that Facebook in turn pushes to users, are often angry, divisive and sensational posts that contain misinformation, Haugen said.

"We should have human-scale software, where humans have conversations together, not computers that facilitate who we can listen to," Haugen said during the hearing.

Facebook's algorithms can struggle to determine what qualifies as hate speech or violence, leaving harmful videos and posts on the platform for too long.

Facebook removed nearly 6.7 million pieces of organized hate content from its platforms from October to December 2020. Some deleted posts involved organ sales, pornography and firearm violence, according to a Journal report.

However, some of the content that its systems may miss includes violent videos and recruitment posts shared by people involved in gang violence, human trafficking and drug cartels.
