Nudity, hate speech and spam: Facebook reveals how much content it kills

In the first quarter of 2018, Facebook removed 583 million fake accounts and 837 million pieces of spam, including posts, images, videos and comments.

Facebook released its Community Standards Enforcement Preliminary Report on Tuesday, providing a look at the social network's methods for tracking content that violates its standards, how it responds to those violations, and how much content the company has recently removed.

Facebook has previously estimated that fake accounts make up 3 percent to 4 percent of its monthly active users.

In the first quarter alone, Facebook disabled roughly 583 million fake accounts, most of which were disabled within minutes of registration.

Facebook has faced a storm of criticism over what critics call its failure to stop the spread of misleading or inflammatory information on its platform ahead of the U.S. presidential election and the United Kingdom's Brexit vote to leave the European Union, both in 2016.

These figures may seem high, but set against the enormous volume of accounts and content flowing through Facebook each day, they represent a relatively small fraction of activity on the platform. For hate speech, however, Rosen said, "our technology still doesn't work that well".

On Tuesday, May 15, Guy Rosen, Facebook's Vice President of Product Management, published a post on the company's newsroom blog. The report does not cover how much violating content Facebook missed. Of hate speech, Rosen wrote: "We tend to find and flag less of it, and rely more on user reports, than with some other violation types".

Some 837 million pieces of spam were removed in Q1 2018, nearly all of which were found and flagged by Facebook's systems before anyone reported them.

"My top priorities this year are keeping people safe and developing new ways for our community to participate in governance and holding us accountable", wrote Facebook CEO Mark Zuckerberg in a post, adding: "We have a lot more work to do". "And it's created to make it easy for scholars, policymakers and community groups to give us feedback so that we can do better over time". That was less than 0.1 percent of viewed content - which includes text, images, videos, links, live videos or comments on posts - Facebook said, adding it had dealt with almost 96 percent of the cases before being alerted to them.

"We restricted access to content in India in response to legal requests from law enforcement agencies and the India Computer Emergency Response Team within the Ministry of Electronics and Information Technology". The company's reputation took a serious hit after news broke of their alleged role in facilitating questionable use of user data and they desperately need a win to help get them back on their feet.

The investigation follows revelations in March about Cambridge Analytica's collection of user data, after which Facebook was forced to admit it had allowed the data of tens of millions of users to be mishandled.