Facebook says it took down 2 million terrorism posts in 2018


Over the last 18 months, Facebook has significantly increased measures aimed at identifying inappropriate content and protecting users, said Guy Rosen, Vice President of Product Management.

While the removal of 583 million fake Facebook accounts is perhaps the biggest takeaway from the report, the company pointed out how its flagging and removal metrics had improved compared with previous quarters - for example, improvements in photo-detection technology that can find both old and newly posted content.

The content audited included graphic violence, hate speech, adult nudity and sexual activity, spam, terrorist propaganda (IS, al-Qaeda and affiliates) and fake accounts.

The company removed or placed a warning screen in front of 3.4 million pieces of graphic-violence content in the first quarter, almost triple the 1.2 million a quarter earlier, the world's largest social network said in the published document. It estimated that between 7 and 9 of every 10,000 content views on the platform were of material that violated its adult nudity and sexual activity standards.

Under increasing pressure to disclose how it polices its platform, Facebook revealed it took down 837 million pieces of spam content between January and March of this year.

Facebook's report suggests its investment in AI that can help moderate objectionable content is slowly paying off.

Facebook took action against 2.5 million pieces of hate speech content during the period, a 56 percent increase over October-December.


Facebook previously estimated fake accounts as accounting for 3 percent to 4 percent of its monthly active users.

"We have a lot of work still to do to prevent abuse," Rosen said. By comparison, the company removed 2.5 million pieces of hate speech in Q1 2018 - 38 percent of which was flagged by its technology. The problem with trying to proactively scour Facebook for hate speech is that the company's AI can only understand so much at the moment. "While not always flawless, this combination helps us find and flag potentially violating content at scale before many people see or report it".

Facebook is struggling to block hate speech posts, conceding that its detection technology "still doesn't work that well" and that flagged content still needs to be reviewed by human moderators.

It's also why we are publishing this information.

Facebook stepped further into its new era of data transparency Tuesday with the release of its inaugural Community Standards Enforcement Report.

"Today's report gives you a detailed description of our internal processes and data methodology," Rosen said, adding that Facebook welcomes feedback on the data.
