Three years after it stopped accepting applications for accounts to be verified and bestowed with the coveted blue checkmark, Twitter has released a draft of its new verification policy due to come into effect next year, including proposed criteria for accounts to be granted the sought-after status. The company put the brakes on the program in 2017 following complaints that it was verifying accounts of alt-right and other fringe figures and giving their views undue legitimacy in the process.
Not an endorsement — Verification serves as a symbol of trust — people ascribe a certain level of authority to accounts with the checkmark because the identities of these users have been authenticated by Twitter. It's really meant to guarantee you're talking to the real Obama or Sean Hannity (or Yogi Bear) rather than an impersonator, but critics complained that the vague standard for verification led people to misconstrue the badge as Twitter endorsing a particular figure's views as worthy of listening to.
Twitter never actually stopped verifying accounts. Its process became even more opaque when it pulled down the public application option, as it continued to verify tens of thousands of accounts and nobody quite knew how it was choosing them. But the company always promised that it would bring back a more formal program in the future with specific criteria for eligibility. It hopes the new process will resolve confusion around what it means to be verified, clarifying that anyone who is considered notable can receive the badge so long as they're not spreading harmful commentary.
Transparent rules — Under the new program, there are six types of "Notable Accounts" that Twitter initially intends to verify:
- Government
- Companies, brands, and non-profit organizations
- News
- Entertainment
- Sports
- Activists, organizers, and other influential individuals
Those categories are pretty broad in and of themselves, but Twitter breaks down specific criteria that apply to each of the account types. For instance, a lot of alt-right users could fall under the last category: activists, organizers, and other influential individuals. But Twitter says to be eligible, accounts must abide by its rules, and cannot promote "the supremacy or interests of members of any group in a manner likely to be perceived as demeaning."
It also adds they must have "off Twitter notability," meaning something like a Wikipedia page or Google Trends activity showing that people frequently search for their name.
Twitter intends to simultaneously introduce a process for automatically removing verification from accounts that consistently violate its rules, such as those promoting hate speech or endorsing violence, though it hasn't specified more details on how that will work.
Besides meeting the criteria for a listed category, to be verified users must be active and not have had a 12-hour or seven-day suspension for rule violations in the past six months (excluding successful appeals).
Defining "public interest" — Previously, when Twitter was criticized for verifying the accounts of seedier characters, the company simply pointed to its policy of awarding the blue checkmark badge to accounts "of public interest." With these new rules, Twitter is (rightfully) saying that the account of a white supremacist isn't in the public interest, even if they drive conversation or have lots of followers.
Twitter is trying to include its community in the process of developing these new rules, so it created a survey where anyone can share thoughts on how the draft rules could be improved. The public feedback period starts today, November 24, and continues until December 8. Twitter hopes to review feedback and introduce a final policy on December 17.
Despite pushback from conservatives who falsely claim they're being censored, Twitter hasn't been afraid to police its platform for harmful content. At the same time, the company has asked Congress for more clarification on what types of content should or shouldn't be prohibited, so that it's not forever taking shots from both sides of the aisle over its content moderation decisions.