Policy Change

Twitch answers 6 urgent questions about its off-service misconduct policy

Twitch's new misconduct policy opens up some questions.


Twitch is cracking down on bad behavior. Again.

The streaming platform began enforcing new policies on hateful conduct and harassment back in January. They were designed to police users engaging in racist behavior, sexual harassment, and more. Even with the tighter rules, though, the policy extended only to behavior on Twitch’s own platform.

On April 7, Twitch rolled out an even stricter policy that encompasses off-site misconduct. The new process utilizes a third-party investigator that will open cases against Twitch users who have been reported for harmful or inappropriate behavior. It’s an in-depth plan to deal with complex cases that seep beyond the popular streaming platform, though the company’s announcement left some unanswered questions about how it works.

Twitch tells Inverse its new rules and process have already gone into effect — this isn’t something that will be gradually phased in over a matter of weeks or months. As part of the new policy, the site opened a new email address (OSIT@twitch.tv) where anyone can report an instance of a Twitch user engaging in off-service misconduct. Those cases are sent to an external investigator who Twitch says has deep expertise in independent “workplace and campus conduct issues.”

[Image: A streamer uses Twitch Studio’s overlay. Credit: Twitch]

Users can report behavior that poses a “substantial safety risk to the Twitch community.” The list of potential violations covers users who are associated with extremism, hate groups, or sexual exploitation such as grooming. If a Twitch user engages in inappropriate, hateful, or abusive behavior elsewhere — say on another social media platform or online forum — they can be reported and a case may be opened against them.

While the new rules outline an explicit process, they don’t provide a timeline for case review, specify whether old reports can be reopened, or clarify whether general off-platform hate speech counts as a violation.

Inverse spoke to a Twitch spokesperson who provided clarification on the finer details of the policy.

How long do you anticipate it will take for cases to go through a full review and result in action?

Our team will strive to respond to any inquiries to the OSIT@ alias as quickly as possible. We have expanded our internal Law Enforcement Response team and brought on our third-party law firm to ensure our teams are fully equipped to manage investigations as efficiently and thoroughly as possible.

Is this a three-strike policy?

Enforcements will be based on the severity of the infraction once it is verified and our investigation is complete. We will review every report that comes in through the OSIT@ alias, but will only take action when we have verified evidence of a serious offense.

Will users be able to reopen old cases here, or will any be picked up by the team retroactively?

We encourage all users to report relevant instances of off-service misconduct that fall into the categories listed in our policy to our OSIT@ alias. This is a separate reporting process from our standard on-service reporting flow and is processed by a separate team.

[Image: Congresswoman Alexandria Ocasio-Cortez played Among Us on Twitch in October 2020 to urge Americans to vote.]

Are you able to share more specific information on the third-party investigation team?

Due to privacy concerns, we will not name our third-party investigations law firm.

Was there a specific incident that triggered this policy?

This policy was not triggered by one specific incident but was a long-term effort that was informed by a range of cases and incidents. In particular, our safety operations and law enforcement response teams worked diligently to process the allegations of sexual misconduct that surfaced across the gaming industry over the summer. Through the course of that work, we realized that our current policy regarding off-service misconduct was not clear enough, so our policy team began reviewing our guidelines and internal protocol.

At the same time, we decided to bring on an experienced third-party investigations law firm in order to support our internal law enforcement response team. We have put a great deal of thought into this program and feel confident this will enable us to take thoughtful and responsible action in these types of cases.

Will you be investigating off-platform hate speech that's not directly affiliated with a group?

At present, we will only enforce against the behaviors below under the new policy. We have prioritized these offenses because they pose an immediate physical safety risk to our community, but we recognize that there are many behaviors that are not covered under this initial list that may be prevalent online. Our goal is to protect our community, and we are committed to learning from this policy and our investigations before we consider any potential expansion in scope down the road.

The full list of behaviors currently included in the policy includes:

  • Deadly violence and violent extremism
  • Terrorist activities or recruiting
  • Explicit and/or credible threats of mass violence (i.e., threats against a group of people, event, or location where people would gather)
  • Leadership or membership in a known hate group
  • Carrying out or deliberately acting as an accomplice to non-consensual sexual activities and/or sexual assault
  • Sexual exploitation of children, such as child grooming and solicitation/distribution of underage sexual materials
  • Actions that would directly and explicitly compromise the physical safety of the Twitch community
  • Explicit and/or credible threats against Twitch, including Twitch staff

That last bit could be a sticking point for Twitch’s community. Focusing explicitly on violent threats and affiliations with specific groups leaves a significant gap in the protections. The policy will deal with users who pose an immediate risk, but not necessarily with the kind of behavior that leads to that violence. It tackles the most visible problems, though not necessarily their root causes.

[Image: Twitch’s mod view, which lets users filter out language. Credit: Twitch]

It’s also still unclear what constitutes being a member of a “known hate group.” The language regarding violent extremism and terrorist activities recalls the January 6 attack on the United States Capitol, though many who took part in the insurrection attempt weren’t formally affiliated with a group. Rather, they fell under the broad umbrella of the “alt-right” label.

That’s where the lines could begin to blur. If a Twitch user parrots the ideology of white nationalist groups but isn’t part of a specific group, can they be investigated? Will Twitch lump protest movements like Antifa, which some Republican lawmakers have tried to misleadingly label a hate group, in with actual organizations? We’ll only know what exactly is covered once Twitch starts resolving cases.

Despite the uncertainty, the policy is a crucial step in keeping Twitch’s community safe. The net might not be wide enough to deal with every situation, but it should catch some of the platform’s most immediate dangers, and make the service safer and more welcoming to all users.

