Facebook (FB) has practically courted controversy during its 13-year history. But it has never had to battle allegations that it abetted a massive Russian propaganda campaign aimed at disrupting American democracy — until now.
The social media giant has tangled with politicians and regulators before — over privacy issues, questions of political bias, antitrust concerns and other matters. It has generally prevailed, keeping government regulators at a distance. But Facebook and its smaller cousin Twitter (TWTR) are now embroiled in a high-stakes political controversy, with national-security implications, that could dwarf any regulatory challenge either company has faced before. Some critics liken it to the weaponization of social media.
“If Facebook and Twitter are now seen as agents of a foreign actor, that becomes a rationale for significant regulatory intervention,” says Simon Rosenberg, president of NDN, a center-left think tank in Washington, DC. “This could be a game-changer for both.”
Facebook recently provided special counsel Robert Mueller, who’s investigating Russia’s role in the 2016 U.S. presidential election, with data on ads purchased by Russian interests during last year’s campaign, purchases that might violate U.S. election law. Facebook says total spending on those ads was no more than $150,000, for perhaps 5,200 of the small ads that show up in users’ news feeds. That’s about 0.0006% of Facebook’s $26.7 billion in revenue last year. If that’s the extent of it, maybe it’s no big deal.
How much did Facebook know?
But that’s probably not the extent of it. Facebook has indicated there could be more questionable ads it doesn’t know about, and if that’s the case, Mueller, with subpoena power, will likely find out. Some members of Congress think Facebook hasn’t disclosed all it knows, and they’re planning to call Facebook executives to testify under oath. And a group of Democratic members of Congress has called on the Federal Election Commission to issue new guidance on how to prevent political activity by foreign interests on American social-media sites. That could be a prelude to new regulation.
Technology experts, meanwhile, wonder how Facebook — which markets itself as a master of targeted data, with an algorithm for everything — could fail to know Russian interests were using the company’s platform to roil a U.S. presidential election. There’s also a growing body of evidence that Russian agents created millions of fake Twitter and Facebook accounts to promote Donald Trump’s candidacy for president last year. That activity isn’t necessarily illegal, but it risks tarnishing both brands and the trust users place in them.
“I believe that what we know now will be the tip of the iceberg,” says Tim Chambers, who runs the digital arm of the Dewey Square Group and authored a recent paper on the malicious use of social media in American politics. “We’ve had spam, false accounts and click-fraud before, but this pulls all of them together and adds the idea of intentional mass propaganda. It’s a whole new area of seriousness.”
Facebook and Twitter generally suspend fake accounts and suspicious activity once they’ve been identified. Both companies say they’re developing new technology and other methods to better police political activity on their platforms. Neither responded directly to questions from Yahoo Finance for this story.
Yet the rising prominence of social media, long a boon for both companies, has now created a new vulnerability for them — and the United States as a whole. “This isn’t just a question of commerce,” says Meredith McGehee, chief of policy for Issue One, a nonprofit that monitors government ethics. “This is a political question and that always makes people move. They are probably going to face great pressure to change their business model.”
The exploitation of social media by foreign interests for political purposes — which researchers at Oxford University call “computational propaganda” — is basically a digital twist on established forms of influence operations such as broadcasting political messages into a target country, or even dropping leaflets from airplanes. The goal isn’t just to influence elections, but to sow discord and confusion and weaken trust in national institutions, such as the electoral system, the government and the press. While American investigators are focusing on Russia’s attempts to do this in the United States, American intelligence agencies have operated in similar fashion in places hostile to U.S. interests.
The rise of bots
What’s relatively new is the brazen use of Facebook, Twitter and other social-media networks for these purposes. Ads are just a small part of the toolkit. Another tool is bots: fake accounts that impersonate real people, spew political messages and generally exist to stir up trouble. Twitter has estimated that about 8% of all accounts on its network are fake; outside researchers think the share could be nearly twice as high. Since Twitter has about 300 million users, that would amount to roughly 45 million fake accounts. They’re not all nefarious and they’re not all Russian, but most exist for some kind of illicit purpose.
Facebook’s numbers are harder to estimate because the company is cagier about its user data than Twitter. In its latest quarterly filing with the SEC, Facebook estimated that 1.5% of its 2.01 billion monthly users are “undesirable” accounts, including fake ones. That works out to roughly 30 million undesirable accounts. Facebook also said, however, that its estimate “may not accurately represent the actual number of such accounts” because it’s based on a “limited sample.” A recent New York Times investigation identified “hundreds or thousands” of fake Facebook accounts specifically linked to Russian sources that last year generally published information supporting Donald Trump — regarded as the Kremlin’s favored candidate, because of his dovish views toward Russia — and deriding Hillary Clinton.
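Those percentages translate directly into the headline counts. Here is a minimal back-of-the-envelope sketch in Python, using only the figures cited above; the 15% high-end rate is an assumption, one plausible reading of “nearly twice” Twitter’s own 8% estimate:

```python
# Back-of-the-envelope check of the fake-account estimates cited above.
# The 15% high-end rate is an assumption, not a reported figure.

twitter_users = 300_000_000        # ~300 million Twitter users
twitter_fake_low = 0.08            # Twitter's own estimate: ~8% fake
twitter_fake_high = 0.15           # outside researchers: nearly twice that

facebook_users = 2_010_000_000     # 2.01 billion monthly Facebook users
facebook_undesirable = 0.015       # 1.5% "undesirable" per Facebook's SEC filing

print(f"Twitter, low:  {twitter_users * twitter_fake_low / 1e6:.0f} million fake accounts")
print(f"Twitter, high: {twitter_users * twitter_fake_high / 1e6:.0f} million fake accounts")
print(f"Facebook:      {facebook_users * facebook_undesirable / 1e6:.1f} million undesirable accounts")
```

That yields 24 million to 45 million for Twitter and about 30 million for Facebook. None of these are precise counts (Facebook itself cautions that its figure comes from a limited sample), but the arithmetic shows how the headline numbers follow from the percentages.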
These fake bot accounts aren’t literally robots that operate with no human intervention. Software, rather, helps human operators run networks of bots, or botnets, with some automated features and some human input. Some researchers prefer the term “cyborg,” to connote a human augmented by technology.
A human controller — operating, perhaps, from a reputed bot farm in St. Petersburg, Russia — might oversee 50 or 100 bots. “I would envision a control room where the human operator has a screen telling what activity is going on with each bot,” says V.S. Subrahmanian, a cybersecurity professor at Dartmouth College. “The software automatically suggests text, pictures, emojis, and links that should be in the posts.” For bots targeting an American audience, the operators would almost certainly speak English.
Well-run bot accounts appear human because they steal personal details from the online accounts of real people and scrape appealing photos from around the web. Gaining friends and followers is key to passing as a legitimate account. Software can identify real social-media users inclined toward whatever message the bot is designed to propagate, and connect with them using the various tools Facebook and Twitter offer. Once the bot starts posting, dialogue with real users ensues, its posts get retweeted or reshared, and social media begins to amplify whatever message the bot is sending.
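To make the “cyborg” division of labor concrete, here is a minimal, purely illustrative sketch. Every name in it (ControlRoom, publish, the suggestion pool) is hypothetical, and publish is a stub that only prints; the point is the structure Subrahmanian describes, in which software drafts posts at scale and a human operator approves them:

```python
import random
from dataclasses import dataclass, field

# Hypothetical illustration of the human-in-the-loop ("cyborg") workflow
# described above: software drafts content for many bot accounts, and a
# single human operator approves or rejects each draft. No real platform
# API is involved; publish() is a stub that only prints.

SUGGESTION_POOL = [
    "Can't trust the papers anymore. #fakenews",   # the kind of catchphrase
    "Who is really running this country?",         # the software might surface
]

@dataclass
class Bot:
    handle: str
    pending: list = field(default_factory=list)    # drafts awaiting approval

class ControlRoom:
    """One human operator overseeing, say, 50-100 bot accounts."""

    def __init__(self, handles):
        self.bots = [Bot(h) for h in handles]

    def draft_posts(self):
        # The automated half: software suggests text for each account.
        for bot in self.bots:
            bot.pending.append(random.choice(SUGGESTION_POOL))

    def review(self, approve):
        # The human half: the operator decides what actually goes out.
        for bot in self.bots:
            for draft in bot.pending:
                if approve(bot.handle, draft):
                    publish(bot.handle, draft)
            bot.pending.clear()

def publish(handle, text):
    # Stub standing in for a posting mechanism; deliberately does nothing real.
    print(f"[{handle}] {text}")

room = ControlRoom([f"patriot_{i}" for i in range(3)])
room.draft_posts()
room.review(approve=lambda handle, draft: True)    # operator rubber-stamps all
```

The ratio is what matters in this sketch: one operator, dozens of accounts, with automation filling in whatever the human doesn’t have time to do.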
Twitter Audit, a free online tool, estimates that 43% of President Trump’s 38.6 million Twitter followers (roughly 16.6 million accounts) are fake. Trump himself has retweeted posts by bots, including one in August that complained about “fake news” — exactly the kind of catchphrase bot software might highlight for inclusion in a post. “Influence bots are trying to surreptitiously influence how Americans think and vote,” says Subrahmanian. “This is a new kind of trap, one unleashed more or less for the first time in the last election.”
Democracy makes the U.S. vulnerable
There’s a peculiar irony to the weaponization of social media: the United States can’t do to Russia, or most other adversaries, what Russia seems to be doing to us. That’s because Russia, China and most nondemocratic nations tightly control the internet and other forms of media, and some ban Facebook and Twitter completely. So Russia, in effect, is exploiting American companies that thrive on the openness of information in the United States — to harm the United States. “Facebook and Twitter are not serving their users well,” says Chambers, “and they’re not serving the country they were founded in particularly well.”
This situation isn’t entirely new. McGehee of Issue One likens the current wild-west atmosphere on social media to the early days of radio, when anybody could broadcast, fake news was rife on the airwaves and bogus information occasionally created panics. That led to government control of the spectrum and the issuing of licenses, which required broadcasters to follow certain rules. Television followed the same model when it came along. Social-media advocates howl that such regulation on the internet would stifle free speech. But that argument now faces a powerful counterargument. “Every government understands the power of disinformation and propaganda,” McGehee says. “The challenge is finding the right public policy without trying to restrict speech in any way.”
Self-regulation?
As the issue heats up in Washington, Facebook and Twitter will almost certainly argue that they should be able to self-regulate when it comes to computational propaganda, as they already do on other issues by prohibiting hate speech and other types of offensive content. The big question is whether the social-media giants can be trusted. Twitter, for instance, could lose a significant number of accounts if it aggressively purged fakes, at a time when analysts have been hammering the company for lack of growth and the stock has been struggling. So it may actually have a financial incentive to tolerate fake accounts. Facebook is a different story, with stratospheric profits and plenty of financial headroom. Yet that only sharpens the question of why Facebook didn’t catch the Russians sooner, and why it takes news coverage or other outside inquiries to unmask fake accounts Facebook itself didn’t flag. The answers coming over the next few months will tell us a lot about the future of social media.