Facebook Finally Gets Real About Fighting Fake News

The social network’s updates to address fake news are live now. And while they won’t solve the problem overnight, they’re an important first step.

After coming under heavy public criticism for not taking full responsibility for how it may have affected the outcome of the 2016 presidential election, Facebook has finally laid out how it plans to crack down on fake news. The social network’s corrective updates are starting to roll out right now, and while they won’t solve the problem overnight, they’re an important first step.

“We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we're approaching this problem carefully,” Adam Mosseri, vice president of product management for News Feed, wrote in the blog post announcing these changes. “We've focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.”

Facebook's strategy combines crowdsourced flagging (similar to how it already polices mature content), reliance on third-party fact-checkers, and financial disincentives for fake-news hucksters. Each aspect of the rollout has its strengths, but each also invites a few questions.

A Flood of Fact Checking

First, Facebook says a new option is appearing in the News Feed that lets anyone mark a post as fake news. You can find it by clicking the upper right-hand corner of a post. A group of researchers---full-time Facebook employees---will assess whether the websites behind stories flagged by the community belong to a legitimate media organization or to a site that merely masquerades as one. These researchers are not reviewing the content of the post itself, Facebook says---only whether the domain is purporting to be a news source, based on clear guidelines, including how the site describes itself. This applies to websites with clearly spoofed domains---say, a domain with a name like FoxNews123.au that poses as legitimate. But if a story is categorized as coming from a news source, Facebook says, it will be eligible to be sent to third-party fact-checking organizations. "This is still early days for these tests, and we'll continue to learn and iterate," a Facebook spokeswoman wrote in an email to WIRED.
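Facebook hasn't published how that domain review works under the hood, but the spoofed-domain idea is easy to illustrate. The sketch below is a hypothetical check, in Python, that compares a flagged story's domain against a small, invented allowlist of real outlets and flags look-alikes such as FoxNews123.au; the list, names, and heuristics are assumptions for illustration, not Facebook's actual rules.

```python
import re
from urllib.parse import urlparse

# Assumed allowlist of real outlets; purely illustrative, not Facebook's actual list.
LEGITIMATE_NEWS_DOMAINS = {"foxnews.com", "nytimes.com", "washingtonpost.com"}

def host_of(url: str) -> str:
    """Extract the lower-cased host from a URL, with or without a scheme."""
    host = urlparse(url).netloc or urlparse("//" + url).netloc
    return host.lower()

def looks_spoofed(url: str) -> bool:
    """True if the domain imitates a known outlet's name without being that outlet."""
    host = host_of(url)
    parts = host.split(".")
    brand = parts[-2] if len(parts) >= 2 else host  # e.g. "foxnews123" from "foxnews123.au"
    for legit in LEGITIMATE_NEWS_DOMAINS:
        if host == legit or host.endswith("." + legit):
            return False                   # the real domain, not a spoof
        legit_brand = legit.split(".")[0]  # e.g. "foxnews"
        # Same brand name with digits or hyphens tacked on, or buried inside a longer
        # name, on some other domain: treat it as a masquerade.
        if brand != legit_brand and (
            re.fullmatch(re.escape(legit_brand) + r"[\d\-]+", brand)
            or legit_brand in brand
        ):
            return True
    return False

if __name__ == "__main__":
    for candidate in ("http://www.foxnews.com/politics", "http://foxnews123.au/story"):
        print(candidate, "->", "looks spoofed" if looks_spoofed(candidate) else "not flagged")
```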

Facebook has already enlisted select third-party fact-checkers---Snopes, Factcheck.org, ABC News, the AP, and Politifact at launch, all members of Poynter’s International Fact Checking Network---to comb through disputed stories and share their findings with the Facebook community at large. There are no financial arrangements between Facebook and the fact-checkers, or arrangements that would otherwise benefit the groups, and Facebook says it will add more partners going forward.

According to Eugene Kiely, director of Factcheck.org, after Facebook alerts them to potentially fake news, the fact-checkers will send back a link to a story that debunks it, if applicable. Facebook will then append the questionable content with a notice that reads "Disputed by 3rd Party Fact-Checkers," with an option to read more about why that specific post was flagged. If users try to share the post anyway, they'll be met with an interstitial reminding them that third-party fact-checkers disputed it, along with a note that reads "Before you share this story, you might want to know that independent fact-checkers disputed its accuracy."
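That dispute flow amounts to a simple state change on a post plus a gate on sharing. Here is a minimal sketch of it, using hypothetical names (Post, record_fact_check, share), since Facebook's internal APIs are not public:

```python
# Minimal sketch of the dispute flow Kiely describes. A post that a fact-checker has
# debunked gets a "Disputed" notice attached, and an attempt to share it surfaces a
# warning interstitial before the share goes through. All names here are invented.
from dataclasses import dataclass, field
from typing import List, Optional

DISPUTE_NOTICE = "Disputed by 3rd Party Fact-Checkers"
SHARE_WARNING = ("Before you share this story, you might want to know that "
                 "independent fact-checkers disputed its accuracy.")

@dataclass
class Post:
    url: str
    disputed: bool = False
    debunk_links: List[str] = field(default_factory=list)

def record_fact_check(post: Post, debunk_link: Optional[str]) -> None:
    """Apply a fact-checker's finding: a debunking link marks the post as disputed."""
    if debunk_link:
        post.disputed = True
        post.debunk_links.append(debunk_link)

def render_notice(post: Post) -> Optional[str]:
    """The label shown under a disputed post, pointing to the debunking story."""
    if post.disputed:
        return f"{DISPUTE_NOTICE} (read more: {', '.join(post.debunk_links)})"
    return None

def share(post: Post, confirm: bool = False) -> bool:
    """Sharing a disputed post requires acknowledging the warning interstitial."""
    if post.disputed and not confirm:
        print(SHARE_WARNING)   # the interstitial the user sees
        return False           # share is held until the user confirms
    return True                # share proceeds

if __name__ == "__main__":
    post = Post(url="http://foxnews123.au/story")
    record_fact_check(post, debunk_link="https://www.factcheck.org/example-debunk/")
    print(render_notice(post))
    share(post)                # prints the warning, does not share
    share(post, confirm=True)  # the user shares anyway
```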

The group may also research and write new stories based on what’s trending on Facebook, Kiely says, “but only to the extent that we have the resources and it fits our mission.” Aaron Sharockman, executive director of Politifact, says his team will do the same. “We can review the posts that have been marked as suspicious and make news judgments about what we cover, and how many resources are required,” he says.

Previously flagged stories should keep them plenty busy. In fact, the sheer volume could create a real headache, and potentially a logjam. Sharockman says that's just part of the job. “There has always been more things to fact check than people to do the fact checking. I don't expect that to change,” he says. “As a group, I think fact checkers realize the importance of debunking fake news and are prepared to help in any way as long as there remains a need.”

And the organizations Facebook has retained for assistance say this work aligns with their core mission. “What’s different now is the method and delivery system,” says Sharockman. For the most part, the groups say, it’s just fact-checking business as usual.

Eyeballs and Ad Dollars

Additionally, Facebook says people who read a fake story and feel misled are less likely to share it, and it will tweak its ranking algorithm to use that signal in hopes of burying fake news lower in the News Feed. While that premise seems sound at first, a recent BuzzFeed investigation found that the top-performing fake election news stories on Facebook actually generated more engagement than the top stories from news outlets such as The New York Times, the Washington Post, the Huffington Post, and NBC News. It will be up to Facebook to prove, in time, that this signal improves its algorithm rather than works against it.
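Facebook hasn't said how that signal would enter News Feed ranking, so the following is only a toy illustration of the idea: a story that lots of people click through but few then share gets its ranking score discounted. The function name and the misled_weight knob are invented for the example.

```python
# Toy illustration of a "read it but didn't share it" ranking discount.
# Not Facebook's algorithm; the weighting is an assumption for the example.
def ranking_score(base_score: float, clicks: int, shares_after_reading: int,
                  misled_weight: float = 0.5) -> float:
    """Discount a story's score when readers rarely share it after reading.

    base_score: whatever the ranking model would otherwise assign.
    misled_weight: how strongly the read-without-share pattern demotes a story
                   (an illustrative knob, not a real Facebook parameter).
    """
    if clicks == 0:
        return base_score
    share_rate = shares_after_reading / clicks
    # A low share rate among readers pushes the story lower in the feed.
    penalty = misled_weight * (1.0 - share_rate)
    return base_score * (1.0 - penalty)

if __name__ == "__main__":
    print(ranking_score(10.0, clicks=1000, shares_after_reading=700))  # largely unaffected
    print(ranking_score(10.0, clicks=1000, shares_after_reading=20))   # demoted
```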

Finally, Facebook says, it's working on new ways to choke off the ad dollars flowing into spoofed domains---the FoxNews123s of the world. It's also re-evaluating whether publishers on its platform violate its Audience Network ad policies.

Last month, Facebook (along with Google) said it would punish fake news sites with stricter ad rules. Specifically, Facebook clarified existing rules stating that it won't integrate or display ads in apps or sites that are illegal, misleading, or deceptive---sites that promote regulated goods, pornography, and violence, for instance. With that move, Facebook explicitly added fake news to the list, and it says this update is a continuation of enforcing that policy. It hasn't said, though, how effective the move has been.
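In practice, that policy amounts to an eligibility gate on publishers. The sketch below shows the shape of such a gate with an invented category list; Facebook hasn't said how it actually classifies fake-news publishers, so that detection step is left out here.

```python
# Illustrative only: an eligibility gate in the spirit of the Audience Network policy,
# with a made-up category list. How a publisher gets tagged with these categories is
# exactly the part Facebook hasn't explained.
BANNED_CATEGORIES = {"illegal", "misleading", "deceptive", "fake_news"}

def eligible_for_ads(publisher_categories: set) -> bool:
    """A publisher is ineligible if it falls into any banned policy category."""
    return not (publisher_categories & BANNED_CATEGORIES)

if __name__ == "__main__":
    print(eligible_for_ads({"news"}))               # True: ads can be served
    print(eligible_for_ads({"news", "fake_news"}))  # False: cut off from ad revenue
```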

The company says it can't give away details of how it will bar its ads from being served on fake news sites, asserting that bad actors could game the system if Facebook laid out exact plans. But the larger issue is that enforcement can be difficult. Already, some have criticized Google for spotty enforcement of its own fake-news ad policy. Google, for its part, says it's important to move slowly to make sure the new policy is implemented correctly---a defense Facebook could adopt, too. But in the meantime, fake news sites are still making money.

All these updates are a worthy step toward addressing the scourge of fake news, which presents a real threat to our democracy even as the companies spreading this disinformation make a killing in ad dollars. Facebook has long asserted that as a platform---a mere middleman to these low-life companies---it doesn't bear direct responsibility for the actions of those who abuse its reach. Today, Facebook is stepping up to the issue in earnest. Whether its actions will lead to the kind of meaningful change that's needed remains to be seen.