
This Study Looked at Whether Your Next Tweet Will Call for Revolution

The U.S. military wants to gauge a Twitterstorm before it happens, but will its predictions actually be correct?


Who doesn’t love a good protest on social media? I don’t know how many thousands of tweets I sent out under #OWS during the height of the Occupy movement. By this point it’s a cliché to say activists in the United States and around the world have harnessed social networks to fuel social movements, but it’s true.

At the same time, law enforcement in the U.S. sees the power of Twitter as a grave threat in the hands of ISIS supporters. Twitter reported that during the past year, it shut down 125,000 allegedly ISIS-affiliated accounts in an attempt to blunt the militant group’s ability to groom recruits. The mass shutdown seems to have had some effect on ISIS’s Twitter reach, though any lasting impacts remain to be seen.

It’s not surprising, then, that the U.S. military is interested not only in monitoring social media, but in attempting to predict the size of tweetstorms before they can fully form. A new study, partially funded by the Office of Naval Research and conducted by researchers at Arizona State University, Texas A&M, and Yahoo, found that it’s possible to predict with 70 percent accuracy whether a user’s next post will be a protest post.

Defense One covered the study earlier this month, and reported that the key factor in determining whether a user’s next post will be engagement with a social movement isn’t that user’s personal history. Rather, it’s the history of activists who have mentioned that user. The likelihood that an individual will join in an online protest goes up if “the post mentioning the user is related to the protest,” and “the author of the post mentioning the user is interested in the protest,” researchers Suhas Ranganath and Fred Morstatter told Defense One.

The math behind the predictive algorithm is over my head, but what the researchers are basically saying is that if protest-affiliated friends reach out to Person X on social media, especially about a particular protest, the chance that Person X will post about that protest increases. The formula isn’t perfect, of course, but an accuracy rate of 70 percent isn’t bad considering how many variables factor into human behavior.
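For illustration, here’s a minimal Python sketch of that intuition. It is not the researchers’ actual model; the keyword list, the weights, and the way mentions are combined are all placeholder assumptions of mine. The point is just the shape of the idea: who mentions you, and whether they care about the protest, carries the predictive weight.

```python
# Toy sketch of the mention-based intuition described above.
# NOT the researchers' actual model; all weights are invented.

from dataclasses import dataclass


@dataclass
class Mention:
    text: str                  # content of the post that mentions the user
    author_protest_posts: int  # how many protest-related posts the author has made
    author_total_posts: int    # the author's total post count


# Hypothetical keyword list standing in for "related to the protest."
PROTEST_WORDS = {"#ows", "occupy", "protest", "march", "strike"}


def mention_signal(m: Mention) -> float:
    """Combine the two factors highlighted in the article: whether the
    mentioning post is about the protest, and whether its author is
    interested in the protest (approximated here by posting history)."""
    post_is_protest = any(w in m.text.lower() for w in PROTEST_WORDS)
    author_interest = m.author_protest_posts / max(m.author_total_posts, 1)
    return (1.0 if post_is_protest else 0.2) * author_interest


def probability_next_post_is_protest(mentions: list[Mention]) -> float:
    """Toy aggregation: more (and stronger) protest-related mentions push
    the score toward 1; no mentions leave it at 0."""
    total = sum(mention_signal(m) for m in mentions)
    return total / (1.0 + total)


if __name__ == "__main__":
    mentions = [
        Mention("join us downtown tomorrow #OWS", author_protest_posts=40, author_total_posts=50),
        Mention("great game last night!", author_protest_posts=1, author_total_posts=200),
    ]
    print(f"{probability_next_post_is_protest(mentions):.2f}")  # about 0.44
```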

Monitoring and understanding social media trends isn’t limited to the military. One company I’ve written about here, Geofeedia, offers law enforcement agencies and corporations a program that provides location-based social media monitoring. Is your CEO going to a major economic or climate summit sure to draw thousands of protesters? Geofence the area, and automatically monitor any geo-tagged post on Twitter, Facebook, or a half-dozen other networks.
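Geofeedia’s product is proprietary, but the basic geofencing step is simple to picture: keep only the geo-tagged posts that fall within some radius of a venue. Here’s a rough Python sketch of that filtering; the data format, venue, and radius are made up for illustration, not taken from any real API.

```python
# Rough illustration of geofenced filtering, not Geofeedia's actual code.
# Keeps only geo-tagged posts that fall within a radius of a chosen venue.

from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))


def inside_geofence(post, center_lat, center_lon, radius_km):
    """True if the post carries a geotag that falls inside the fence."""
    if post.get("lat") is None or post.get("lon") is None:
        return False
    return haversine_km(post["lat"], post["lon"], center_lat, center_lon) <= radius_km


# Hypothetical fence: 5 km around a summit venue in central Paris.
posts = [
    {"text": "Marching on the summit now", "lat": 48.8566, "lon": 2.3522},
    {"text": "Nothing to see here", "lat": 40.7128, "lon": -74.0060},
    {"text": "No geotag on this one"},
]
flagged = [p["text"] for p in posts if inside_geofence(p, 48.8566, 2.3522, 5.0)]
print(flagged)  # ['Marching on the summit now']
```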

Geofeedia even offers a service called “sentiment,” which the company claims can gauge the overall crowd mood and sense an upcoming — potentially violent — shift. Lee Guthman, Geofeedia’s head of business development, told me in an interview his program determines crowd “sentiment” by taking “all the words in the phrase, and it attributes positive and negative points to them, and then proximity of words to certain words.”
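Going by Guthman’s description alone, that sounds like lexicon-based word scoring. The toy version below is my guess at the general technique, not Geofeedia’s actual algorithm; the word weights and the intensifier rule are invented for illustration.

```python
# Toy lexicon-based sentiment scoring, a guess at the general technique
# Guthman describes, not Geofeedia's proprietary algorithm.

# Hypothetical positive/negative word weights; a real lexicon would be far larger.
POSITIVE = {"peaceful": 1, "calm": 1, "love": 2, "great": 1}
NEGATIVE = {"angry": -1, "smash": -2, "fight": -2, "riot": -3}
INTENSIFIERS = {"very", "really", "so"}  # crude stand-in for the "proximity of words" rule


def sentiment_score(phrase: str) -> int:
    """Sum positive and negative points per word, doubling a word's weight
    when an intensifier sits directly in front of it."""
    words = [w.strip(".,!?") for w in phrase.lower().split()]
    score = 0
    for i, w in enumerate(words):
        weight = POSITIVE.get(w, 0) + NEGATIVE.get(w, 0)
        if weight and i > 0 and words[i - 1] in INTENSIFIERS:
            weight *= 2
        score += weight
    return score


print(sentiment_score("a peaceful, calm march"))              # prints 2
print(sentiment_score("the crowd is very angry, riot soon"))  # prints -5
```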

That’s not the same formula as the one in the military-funded study, and some degree of skepticism is warranted about the scalability of the predictive power in both cases. Still, it’s obvious that governments and corporations around the world are vying to understand what some call the “firehose” of social media. These networks create far more information than any one person is capable of understanding, so humans rely on machines to sift, sort, and analyze the largest pool of information the world has ever seen.

Predictive programs that promise to convert the tangled jumble of social media into a digestible final product will be sold to the public as valuable tools against militant groups like ISIS. Or, at the state and local level, they will be rolled out as anti-gang initiatives. It is almost certain, though, that monitoring and weaponizing social media will disproportionately affect marginalized populations, activists, journalists, and others whose conversations should be free from monitoring.

And in an age when young people, especially angry young men, say all kinds of dumb stuff online, we’ll likely continue to see cops up-charge a kid for a vague post. Take the case of Devon Coley. He was one of eight people arrested for making threats against cops in the wake of the killing of two NYPD officers in late 2014. Coley was taken into custody after posting an image – possibly from a movie – of someone shooting into a police car, along with an emoji of a gun pointed at a cop’s head. Ultimately a grand jury declined to indict Coley on charges of making a terroristic threat (though he was later rearrested after failing to appear in court on charges he stole a Citibike).

With each new attempt to use big data to predict behavior – whether political, criminal, or some combination of both – the danger grows that innocent people will get sucked up as well. The probability of that is close to 100 percent.
