While Facebook CEO Mark Zuckerberg may be trying to present a more accountable, transparent, and ethically sound image to American lawmakers, disturbing content and misinformation continue to flood his social network. According to the company’s latest content moderation report, Facebook removed 3.2 billion fake accounts between April and September 2019 — more than double the 1.55 billion fake accounts it removed over the same period in 2018.
The company also reported that it took down at least 11 million posts involving child pornography and abuse on Facebook, and that a further 754,000 posts depicting similar content were removed from Instagram. Meanwhile, advocacy groups argue that Facebook’s encryption strategies make it difficult for law enforcement authorities to detect child abuse content. This is on top of reports that the company’s labor practices for its content moderators (who are often hired through third-party contractors) have left them overworked, poorly paid, and traumatized.
Instagram under the microscope — Activists have long criticized Facebook for its fake account problem and the misinformation it enables across the platform. The same issues appear to have spread to Instagram, according to Reuters. Facebook’s report shows the company went on something of a cleaning spree there: numerous Instagram posts were flagged for misinformation, and content associated with terrorist groups was proactively removed — before users reported it — 92.2 percent of the time.
Little to no progress — Ever since the 2016 presidential race, Zuckerberg has tried to win over disgruntled lawmakers with assurances of Facebook’s commitment to accuracy and safety. But it’s painfully evident that the biggest social platform on earth is still struggling with some hellish problems. If its latest content moderation report indicates anything, it’s that Facebook might be “quick and easy” but it’s certainly not safe.