
New app will help keep you from making a fool of yourself at work

Could the internet be a nice place after all?

A woman being cyberbullied.
Shutterstock

What if there were an A.I. that could keep you from making a fool of yourself in a company-wide email? In the future, a new app called BullStop, designed to detect cyberbullying and harassment, could do just that.

What's new -- A research team at Aston University in the U.K. launched a new app this month called "BullStop," designed to proactively protect children and adolescents from a potential barrage of online harassment. The app is available on the Google Play Store and is currently restricted to evaluating and blocking messages on Twitter. Semiu Salawu, a Ph.D. student at Aston and the project's lead researcher, tells Inverse that the team plans to eventually expand the scope of the app to other social media platforms and even more bespoke environments, like individual workplaces.

There is no published research on this project yet, but Salawu says to expect several papers by the end of 2020.

"Because social media has no off switch, neither does cyber-bullying"

What's the history -- For many people across the world, especially teens and young adults, social media platforms like Twitter, Instagram, and now TikTok have become essential places for socializing, communicating, and, unfortunately, harassing. In-person bullying a few decades ago was not necessarily less brutal than the digital harassment of today, but that kind of harassment often had a time limit, enforced by when school or extracurricular activities ended for the day.

Digital harassment, on the other hand, does not bend to the whim of physical distance or time: through our devices, this kind of harassment can follow us while we're hanging out with friends, eating dinner with our families, or even getting ready for bed. Because social media has no off switch, neither does cyber-bullying. A 2018 Pew Research study reports that a majority of teens (59 percent) experience cyberbullying.

The suicide of Megan Meier in 2006, following harassment on the social media site Myspace, helped mark a turning point in this new, ever-present form of harassment. Meier's death came just three years after the launch of Myspace. In more recent years, cyberbullying has also taken the lives of social media stars and influencers, including K-Pop idol Sulli in 2019. Cyberbullying is predicted to only worsen during the Covid-19 pandemic.

According to a Pew Research study, 59 percent of teens are affected by cyberbullying.

Shutterstock

How does it work -- Social media platforms have attempted to address online harassment by allowing users to block certain words on their accounts, but Salawu tells Inverse that this approach is inherently limited.

"A keyword search is based on a lexicon dictionary or wordlist," says Salawu. "The problem with that approach is that it's quite easy for people to beat that, [for example] if I elongate words or swap characters. The other thing is that word lists tend to focus predominately on profanity, [but] it's perfectly easy to bully anyone without using any profane term."

Instead, BullStop uses artificial intelligence to proactively and contextually detect cyberbullying language or word use in a user's incoming and outgoing messages. Before the app hit the Play Store, Salawu says, BullStop was trained on 60,000 tweets to help it learn which words, tones, and expressions should be classified as harassment. For example, BullStop would be better than its keyword-based competitors at detecting the similarity between "fu*k" and "fu*******k," Salawu tells Inverse.
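To make the contrast with a wordlist concrete, here is a minimal sketch of that kind of approach in Python. The normalization step, the toy training examples, and the scikit-learn pipeline are illustrative assumptions, not BullStop's actual code, features, or training data.

```python
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def normalize(text: str) -> str:
    """Collapse elongated characters (e.g. 'fu*******k' -> 'fu*k') so that
    simple obfuscation tricks map back to a familiar token."""
    return re.sub(r"(.)\1{2,}", r"\1", text.lower())


# Toy labeled examples standing in for the ~60,000 annotated tweets the app
# was reportedly trained on (1 = harassment, 0 = benign).
texts = [
    "you are worthless and everyone knows it",
    "nobody wants you here, just leave",
    "great game last night, congrats!",
    "thanks for sharing, really helpful",
]
labels = [1, 1, 0, 0]

# Character n-grams catch misspellings and swapped characters that a plain
# profanity wordlist would miss entirely.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), preprocessor=normalize),
    LogisticRegression(),
)
model.fit(texts, labels)

# Flagged despite the elongation, because the learned features generalize.
print(model.predict(["u r woooorthless, just leave"]))
```

A real system would train on far more data and a stronger model; the point of the sketch is only that a learned classifier judges context and character patterns rather than matching a fixed list of banned words.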

Once granted access to a user's Twitter account, BullStop evaluates incoming and outgoing messages for these tell-tale signs of online harassment and deletes flagged messages for the user. And because everybody's experience of the internet is different, the user can guide the A.I. to better understand their preferences by highlighting messages that should have been flagged, or ones that were flagged unnecessarily. In that way, BullStop can be incredibly individualized, says Salawu.
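Here is one way such a feedback loop might look in code, again as a hedged sketch: the incremental classifier, the hashing features, and the helper function are assumptions for illustration, not the app's published design.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# A hashing vectorizer keeps the feature space fixed, so the model can be
# updated one correction at a time without re-vectorizing old data.
vectorizer = HashingVectorizer(analyzer="char_wb", ngram_range=(2, 4))
classifier = SGDClassifier()

# Initial fit on a (placeholder) base training set.
base_texts = ["you are worthless, just leave", "congrats on the win!"]
base_labels = [1, 0]
classifier.partial_fit(vectorizer.transform(base_texts), base_labels, classes=[0, 1])


def apply_user_correction(text: str, should_be_flagged: bool) -> None:
    """Fold a single 'this should (not) have been flagged' correction back
    into the model, nudging it toward this particular user's preferences."""
    classifier.partial_fit(vectorizer.transform([text]),
                           [1 if should_be_flagged else 0])


# Example: the user marks a sarcastic message from a friend as fine.
apply_user_correction("you absolute menace, see you at 8", should_be_flagged=False)
```

Per-user updates like this are one simple way an identical base model could end up behaving differently for different people, which is the kind of individualization Salawu describes.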

Beyond bullying, in the future BullStop could be trained to identify other forms of harassment, like sexism or racism.

Shutterstock

What's next -- BullStop is currently only available for use on Twitter, but Salawu tells Inverse that the team plans to expand the app to work with Instagram, Facebook, and text messages as well. When it comes to video-based platforms like YouTube and TikTok, though, he says things get a little more challenging.

"The way people consume YouTube and TikTok is different," says Salawu. "People don’t have [or rarely use] an inbox where videos can be sent, rather a bullying video will be published publicly by the bully via their own channel which they own so there is no way for the app to get notified unless you are already monitoring the bully’s channel. And then once you have analyzed and determined that it is bullying, the best you can do is report it by which time it may have already been seen by millions."

While BullStop is primarily aimed at teens thirteen and older, Salawu says the approach behind it could also be used in workplaces. He imagines a web-based plug-in, similar to Grammarly, that helps colleagues better account for their tone in company-wide emails, including detecting language used in different forms of harassment, like sexism.

"We also want to make the [A.I.] available as an API," says Salawu, meaning the tech behind the A.I. could be used and tweaked by other groups and purposes. "For example, you can have an Outlook plug-in that allows people while they're writing an email at work... the [A.I] can scan the messages that you're typing to say 'you might want to change the tone of this message'."
