Twitter made it tedious to share misinformation and that actually worked
How many posts Twitter labeled from Oct. 27 to Nov. 11 after they were flagged by fact-checkers.
Twitter says recent product changes were successful at curbing the spread of election misinformation. The company placed labels on more than 300,000 posts from October 27 to November 11 for sharing information disputed by nonpartisan fact-checkers. Before users could retweet a flagged message, a bold alert warned that the tweet's contents were in question.
Twitter also changed the retweet function so that users were prompted to quote tweet a flagged post and add their own commentary before sharing, an extra step of friction. If a tweet included an article URL, users were also asked to read the article before quote tweeting.
Overall tweets fell — The added friction cut retweets and quote tweets by 20 percent and reduced overall tweet volume. “This change slowed the spread of misleading information by virtue of an overall reduction in the amount of sharing on the service,” Twitter said. It goes to show that at Twitter's scale, even minor changes can have a big impact on activity. The company plans to leave the prompts in place for now while it further studies their impact.
Twitter has been much more willing than Facebook to penalize users who violate its terms of service, most recently banning Steve Bannon over a video suggesting that government officials should be beheaded. Facebook deleted the video but not Bannon's account, with CEO Mark Zuckerberg saying Bannon hadn't violated enough of its policies to warrant a ban. Twitter has slowly been redefining itself as a company that cares more about engagement than user growth. By convincing Wall Street that it doesn't need to become a behemoth, Twitter has more freedom to accept its position as a smaller company rather than trying to appease everyone the way Facebook does.
Both Twitter and Facebook were aggressive about combating misinformation over the past week as pressure mounted for them to act against any attempt to discredit the election outcome. President Trump's Twitter feed was blanketed with labels that hid his tweets and required users to click through to see them. Facebook labeled many posts and put a temporary ban on political advertising as Trump continued to spend money disputing the results. Twitter stopped accepting political ads altogether last year.
Trump has tried to retaliate against Twitter for its labels by signing an executive order aimed at curbing the legal protections that let social media companies freely moderate content. Now that he has lost the election, however, it's unclear whether any changes will actually come to fruition. Biden has been noncommittal about taking action against social media companies.
It's unclear how effective labels are on their own, or whether the extra sharing step Twitter added is what's needed to slow the spread of misinformation.
Platform changes — Critics have long blamed Facebook and Twitter for acting too slowly to curb misinformation. Twitter's announcement today highlights how the inherent nature of these online platforms is partly to blame.
At their respective scales, it's too hard for human moderators alone to catch harmful content before it has already been shared widely. Algorithms on both sites help sensational content spread quickly as users share it without much scrutiny, and machine learning isn't always great at catching borderline violative content. But adding an extra button press or two and alerting users that a post might be false makes them think twice about sharing it.
Changing the inherent ways the platforms function is necessary to slow the spread of questionable content — and it can appear less biased than manually taking action against individual posts, something Facebook in particular has tried to avoid lest it face the ire of conservatives. Twitter has shown that the way social media lets anyone spread information at unprecedented speed, without gatekeepers, is a big part of the problem. In the days of traditional media, fringe ideas didn't get much traction because the barriers to publishing were much higher, and the incentives to get the facts right were correspondingly high.