Following the viral spread of manipulated, or deep fake, videos of Facebook (FB) CEO Mark Zuckerberg late last week and House Speaker Nancy Pelosi this May, Facebook is facing increased scrutiny over its policies on manipulated media.
The social network needs to act quickly to address the problem. Congress, which convened a hearing on Thursday to discuss manipulated video and audio, is concerned that examples like the Pelosi clip could go mainstream and play a role in influencing the 2020 U.S. presidential election. Beyond that, several experts who spoke with Yahoo Finance caution that the sophistication of deep fakes — and other types of misinformation — will only increase if technology platforms like Facebook do not keep them in check.
Late last week, artists Bill Posters and Daniel Howe uploaded to Instagram a deep fake video of Zuckerberg created with the help of technology from advertising company Canny. In the altered video, Facebook’s chief executive was digitally manipulated into discussing the power he wields, “with total control of billions of people's stolen data, all their secrets, their lives, their futures.” Meanwhile, the doctored video of Pelosi, which President Donald Trump tweeted in May, was edited so that the House Speaker appeared to slur her words and seem impaired.
Both videos raised concerns over how Facebook handles the spread of misinformation, particularly deep fake videos, which are becoming more convincing. In the case of the Pelosi video, Facebook did not take it down but rather “deprioritized” it so that it appeared less often on the social network — a decision Pelosi disagreed with, according to a Washington Post report on Tuesday. Meanwhile, the deep fake video of Zuckerberg remains on Instagram, but it, too, is now harder to find.
Removing immunity
Dr. Mary Anne Franks, a professor at the University of Miami School of Law, contends Facebook should be held responsible for misinformation like deep fake videos published on its platform. For now, however, the social network remains immune from liability under Section 230 of the Communications Decency Act of 1996, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In practice, Franks explains, Section 230 shields Facebook from the consequences of hosting misinformation like deep fake videos.
“As it is, there’s little incentive for companies like Facebook to really crack down on this kind of content beyond some sort of media backlash, like there is right now,” says Franks, who suggests amending Section 230 so that tech companies must somehow “earn” the immunity it confers.
Facebook has come under heavy scrutiny since the 2016 U.S. presidential election, making headlines over revelations that Russia used the social network to meddle in the election. Lawmakers, the media and the public have questioned whether Facebook should play a more aggressive role in policing content and the spread of misinformation.
Facebook, for its part, acknowledges the clock is ticking to take more action.
“Leading up to 2020 we know that combating misinformation is one of the most important things we can do,” a Facebook spokesperson told Yahoo Finance in a statement. “We continue to look at how we can improve our approach and the systems we've built. Part of that includes getting outside feedback from academics, experts and policymakers.”
Explicit labels
At the very least, the social network should consider explicitly labeling deep fake videos on the platform as such, so Facebook users know what they’re watching from the get-go. As it stands, Facebook notifies users that a particular deep fake video is cause for concern only when they try to share it. Try sharing the doctored Pelosi video on Facebook, for instance, and you’re notified that there is “additional reporting,” with buttons you can click to read articles from organizations including Factcheck.org, Lead Stories, PolitiFact, the Associated Press, and 20 Minutes.
“Labeling, in particular, is a middle-ground position that we have not fully tested,” explains Robert Chesney, associate dean for academic affairs at the University of Texas School of Law, who adds that if Facebook is given the power to ban such content outright, it could be construed as censorship. “We should try labeling before we go around and just deleting content for people, de-platforming people or suppress their content so no one can find it, bearing in mind that it's a slippery slope that sometimes is going to involve political speech. … Maybe we’d be smart, if we proceed with baby steps, including encouraging the companies as an initial matter, simply to label more aggressively without actually silencing speech.”
The stakes are high, points out Susan Etlinger, an Altimeter Group analyst who specializes in AI ethics.
“We are now in a world where truth can so easily be manipulated,” she says. “So my challenge for Mark Zuckerberg and the CEOs of other social platforms would be to extrapolate out 3 years, 5 years, 50 years. Where will we be then? Granted, Facebook and Twitter can’t singlehandedly ensure societal stability. That’s completely unrealistic. But they don’t have to contribute to instability.”