
Facebook removed more than 100 accounts, Pages, and Groups spreading misinformation targeting Ukraine, Myanmar, and the U.S.

The three networks included Facebook and Instagram accounts, and Facebook Pages and Groups.


Facebook announced today that it has removed accounts associated with “coordinated inauthentic behavior” originating in Russia, Iran, Vietnam, and Myanmar. The largest of the networks comprised 78 Facebook accounts, 11 Pages, 29 Groups, and four Instagram accounts that violated the company’s “policy against foreign or government interference.”

“We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people,” Facebook stated in its announcement. “In each of these cases, the people behind the activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

Three networks identified — Facebook says the accounts were discovered during its internal investigations into coordinated inauthentic behavior, including Russia-linked activity. The first operation originated in Russia and targeted Ukraine and its neighboring countries. Another network, based in Iran, focused on the U.S., while a third, operating out of Myanmar and Vietnam, targeted audiences in Myanmar.

The Iran-based network — six Facebook accounts and five Instagram accounts — was found to be interfering in the U.S.

Rehab for Facebook’s image — This coordinated ban is both necessary and good PR for Facebook. The company’s evolving policies on misinformation have been the subject of intense scrutiny in the recent past — just last year, it decided the answer to most misinformation was to give it “more context” rather than remove it.

The company has been working to rehabilitate the way the public views its misinformation policies by making strategic bans on some misinformation. This week Facebook also announced a partnership with Reuters to fact-check news articles (but not political ads).

Meanwhile, most misinformation lives free — In almost all cases, Facebook’s policy is still to label misinformation rather than remove it, a lax approach that does little to curb its spread. Facebook has repeatedly said it's not a traditional media company and so doesn't need to be held to the same editorial standards. As the primary source of news for billions of users, though, it is finding that position increasingly hard to defend.

Facebook has also recently drawn criticism for its mediocre deepfake policy. But it has weathered not just criticism but also fines from the likes of the FTC before, so it might be a little too optimistic to expect new complaints to change anything, especially because Mark Zuckerberg says Facebook doesn’t care if its users like the platform.