

Facebook Bans Deepfake Videos That Could Sway Voters, But Is It Enough?


Let the misinformation campaigns begin.

As you may know, the proliferation of altered content, fake news, videos intended to sway voters toward one ideology over another, and outright falsehoods seriously impacted the outcome of the last presidential election, and the primary culprit was Facebook. (Although Twitter bots were also part of the problem.)

Millions and millions of people read posts about Hillary Clinton being the antichrist or watched videos of Donald Trump looking like a big orange pig.

Because social media is a free-for-all with a steady stream of posts about any topic under the sun, and because millions of people stick to the safe confines of Facebook, it was easy to feed misinformation on both sides of the political spectrum.

In short, it worked.

Recently, Facebook instituted a new policy that bans some deepfake videos. It’s a step in the right direction in the battle against fake news.

Deepfakes tend to work. Using artificial intelligence and freely available apps, including Doublicat, which debuted just this week and creates deepfake GIFs from a single selfie, anyone can make it look like a celebrity is advocating for a border wall or trumpeting one of the Democratic candidates.

The catch (and, to be honest, there is always a catch on social media) is that this does not include all deepfake videos. In the official announcement, Facebook noted that some fake videos meant for satire are still fine, along with some that have a more serious purpose. Videos that were merely edited for quality or clarity will not be banned. What the announcement really addresses are videos that are part of a misinformation campaign, what Facebook calls manipulated media. This would include videos that are more coercive in nature, such as those that make it appear as if a high-profile figure or a politician is saying something they did not say.

Deepfake videos have been around for a while, including an infamous one where Barack Obama speaks out about fake news.

Of course, deepfakes can be traced even further back to illicit videos, but they reached a tipping point in 2018 because of how easy they were to create. End users could download a few apps and load Photoshop-altered photos to create deepfakes in a few minutes. The artificial intelligence made it easy. Last year, FaceApp became popular because it could make you look older or younger with a few clicks.

In the upcoming election, social media will again play an important role, likely an even bigger one. If it's this easy to create deepfakes using an app, they will only become more common.

Now for the bad news. It's admirable that Facebook is taking a stance against misinformation campaigns and will block these videos, but there are always workarounds. It will be interesting to see whether videos that initially look normal and unaltered will suddenly insert altered portions, and whether Facebook will be able to detect these more subtle deepfakes.

And deepfakes are only a small part of the problem. Everyone has the right to share their opinion on social media, and Facebook can't police every single post and comment to judge the accuracy of the statements. It's extremely difficult to distinguish an oddball opinion you have the right to post from a subversive falsehood meant to disrupt the election cycle.

Good luck with that kind of policing, and I’m not even sure if AI can help.
