Future Tense

Twitter Wants to Use Wikipedia to Help Determine Who Gets a Blue Checkmark

[Photo illustration: collage of Twitter checkmarks and Wikipedia logos. Photo illustration by Slate; images by Wikipedia and Twitter.]

Welcome to Source Notes, a Future Tense column about the internet’s information ecosystem.

Back in November 2017, Twitter announced that it was pausing all account verifications, closing the public submissions process that gives certain accounts the blue checkmark next to their names. The move came after the social media company was heavily criticized for giving the verified checkmark to Jason Kessler, the organizer of that year’s white nationalist rally in Charlottesville, Virginia. In its statement at the time, Twitter said that although verification had simply been intended to authenticate identity, it was now being interpreted as an endorsement. “We recognize that we have created this confusion and need to resolve it,” Twitter said.

That was three years ago. Since then, Twitter’s verification process has officially been on hold, although the company has in fact verified political candidates and medical experts tweeting about COVID-19, and has made thousands of other quiet exceptions. Twitter has fended off complaints about the long delay in accepting public verification requests by saying that it has instead been focusing on priorities like election integrity. Now that the election is over and Fleets have arrived, however, Twitter has signaled its willingness to open up an official path to those coveted blue checkmarks. The company announced Nov. 24 that it is relaunching its verification system in 2021 under a new policy that is open to public feedback until Dec. 8. How has Twitter improved its verification system since 2017? As it turns out, a big part of verification 2.0 is relying on Wikipedia.

According to the proposed new policy, Twitter will verify six types of accounts: government; companies, brands, and nonprofit organizations; news; entertainment; sports; and activists, organizers, and other influential individuals. Wikipedia factors heavily into two of those categories: companies, brands, and nonprofits, and activists, organizers, and other influential individuals. Companies and brands must satisfy two of three requirements: a “presence in public indices” like “Wikipedia (including multiple references to unaffiliated external sources)”; a recurring presence in qualifying news outlets; or a follower count in the top 0.1 percent of active accounts located in the same country. Influential individuals who wish to become verified must satisfy similar requirements for “Twitter Activity” plus prove “Off Twitter Notability,” which can be established by criteria such as a presence on Google Trends based on search activity, featured references in qualifying news outlets, or a Wikipedia page about them. Accounts that are notable and active will be verified provided that they check off enough of the requirements, although Twitter reserves the right to revoke or deny the blue check at its discretion.

When I asked Wikipedia editors how they felt about Twitter’s increased reliance on their project to confirm importance, they responded with a mix of pride and wariness. “This move exemplifies the institutionalization of Wikipedia’s definition of a ‘user’ and points to some ideological alignments between Wikipedia and Twitter,” Monika Sengul-Jones, a communication and technology studies scholar and active Wikipedian, wrote in an email.
Instead of conferring authority based on other factors, like reviews by professional organizations, Sengul-Jones explained, Twitter is relying on Wikipedia’s active users—the volunteer editors—to make these decisions. That gives both the Wikipedia editors and their processes a kind of authority, too.

It’s also striking that both websites will use the same word: notability. Wikipedia has had its own notability guideline since 2006. Editors use the notability standard to determine whether a given subject warrants its own article by reviewing whether it has received significant coverage in “reliable sources that are independent of the subject.” Twitter’s new policy seems to paraphrase this idea, stating that an account may be verified if the subject has a Wikipedia page about them with three “external references to distinct, unaffiliated sources.” (Now that Twitter has caught on to Wikipedia’s notability guideline, perhaps somebody should teach it about [citation needed].)

On the one hand, it’s quite the coup that a volunteer-driven site like Wikipedia could develop a policy useful to a Silicon Valley giant like Twitter. Then again, there has been sustained criticism of Wikipedia’s notability guideline for perpetuating a gender gap on Wikipedia (at present, less than 19 percent of the site’s biographical pages are about women) and otherwise contributing to systemic bias on the encyclopedia. “Will Twitter’s policy amplify this? It’s possible,” wrote Sengul-Jones, pointing to her research on how Wikipedia’s reliable-source guideline and the related notability standard can leave out marginalized communities.

Of course, Twitter isn’t the first major social network to rely on Wikipedia. Back in 2018, Facebook and YouTube announced that they would help users detect fake news by suggesting links to the relevant Wikipedia articles. Today, if you search for videos about flat earth theory, YouTube will display a link below the video to the Flat Earth Wikipedia page, which—ahem—flatly describes the theory as pseudoscience.

But Wikipedia editors see a distinction between how those companies are using Wikipedia to fight misinformation and what Twitter has proposed. “In the fact-checking example of Facebook and YouTube, no individual stands to benefit greatly,” Kevin Li, a Wikipedia administrator and junior at Stanford, told me in an email. There is a widely distributed public benefit when YouTube provides a link to Wikipedia for fact-checking. By contrast, Li argues that Twitter’s use of Wikipedia presents “concentrated benefits to the person who wants the blue check.” This type of personalized reward is not necessarily in keeping with the public-minded mission of Wikipedia.

There is an economic dimension to this, too. Companies like Twitter, Facebook, and YouTube save a lot of money by relying on Wikipedia. By enlisting volunteer Wikipedia editors to decide on issues of notability or fake news, tech companies are keeping down their labor costs. “It’s a form of Twitter offloading its work to us and expecting us to deal with it,” wrote Li.

Other editors told me they are concerned that people will attempt to use Wikipedia as a backdoor to Twitter verification. Having a Wikipedia page created on your behalf certainly seems like an easier task than trending in Google searches, receiving mainstream press coverage, or meeting any of Twitter’s other proposed verification criteria. Already, hundreds of disreputable vendors on platforms like Upwork offer to secretly create a Wikipedia page for someone, sometimes for less than $100, even though this violates the website’s terms of use. These vanity pages are often quickly deleted by vigilant editors because the subject fails the notability standard; even so, they create a lot of additional work for the volunteers. Meanwhile, Sengul-Jones told me that there is evidence that the number of active Wikipedia editors has been in decline. Twitter’s new rules could indirectly contribute to an ever-increasing workload for the project’s limited pool of volunteers.

It seems likely that Twitter’s new policy will be adopted, meaning that Wikipedia will begin to play a much larger role in determining whether an individual or an organization receives the blue check. What’s less clear is whether Twitter’s reliance on Wikipedia will help it solve the same problems that the social network had back in 2017. Recall that Twitter’s original controversy was that Kessler, the white nationalist, had received the blue check. But Kessler has a Wikipedia page, and that’s an example of Wikipedia fulfilling its function: His page describes him as an anti-Semite and a neo-Nazi because the article summarizes what reliable third-party sources have published about him. In other words, racists and even dictators can have Wikipedia pages, because screening for morality is not the encyclopedia’s job. That means, however, that Twitter cannot rely on Wikipedia alone to keep bad actors from being verified on its platform. To me, though, it’s satisfying that even under its new policy, Twitter must do some work for itself.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.