Section 230 is in trouble, but dismantling it might not be the answer
Since a mob of Trump supporters attacked the Capitol on Jan. 6, America’s lawmakers have doubled down on their efforts to halt the spread of conspiracy theories and hate speech online that led to real-world terror and death in Washington, DC.
And much of that conversation centers on reforming or doing away with Section 230 of the Communications Decency Act, which serves as a liability shield for online companies that host third-party content. Think anything from Facebook (FB) to your favorite message board.
A target of both Republicans and Democrats, Section 230 has been in the crosshairs of lawmakers for years — though there’s disagreement over what’s wrong with the law. Some Republicans argue that Section 230 lets Big Tech silence conservative voices, while Democrats contend it allows sites to host misinformation and hate speech without fear of litigation.
And that fundamental disagreement is probably the biggest problem with the law, according to Jeff Kosseff, author of “The Twenty-Six Words That Created the Internet.”
“Nobody has agreed on what the problem is that they want to solve,” says Kosseff, assistant professor at The United States Naval Academy’s Cyber Science Department. “It’s just basic life skills that you need to figure out what the problem is before you have a solution. And I don’t know if we’ll ever have that, because there are people with vastly different visions of what the internet should look like.”
In my column last week, I spoke to critics of the law who convinced me that the Capitol Hill riots could be the end of Section 230, at least as we know it. Since then, I’ve heard from others who say I overlooked some of the law’s nuances. They contended that making changes to Section 230 could have a far greater impact on up-and-coming tech firms that rely on the law’s liability protections.
Instead of altering Section 230, some argue, it’s up to us, the users, to demand changes from tech companies. We can do this by abandoning social media sites that allow hate speech to flourish in favor of services that better appeal to our sensibilities.
At some point, though, Congress will have to take concrete steps to ensure tech companies provide more transparency about how they moderate their services. But Congress should take this step without destroying Section 230, the internet’s foundational law.
What are we really mad about?
Section 230, a law passed in 1996, provides a liability shield for internet companies that make “good faith” attempts to moderate the third-party content they host. But that doesn’t mean Section 230 is the reason companies are legally allowed to host objectionable content on their platforms.
According to attorney Cathy Gellis, a Section 230 expert, the First Amendment is what allows companies to host such content without fear of liability — though the individuals who post that content don’t enjoy the same protection if it violates the law.
“What Section 230 does is it makes the First Amendment rights of the platforms meaningful, because it means that it’s not just liability that they are being protected from. The articulations of Section 230 insulate them from legal process, even the attempt to hold them liable,” Gellis explained.
In other words, she said, Section 230 protects companies from having to shell out boatloads of cash to defend themselves against legal proceedings that could be thrown out on First Amendment grounds anyway.
And that works for companies both big and small.
Revoking Section 230 could mean the end of the internet as we know it
Republicans and Democrats have both targeted Section 230 for different reasons. Some Republicans claim that it allows sites like Facebook, Twitter (TWTR), and Google (GOOG, GOOGL) to promote anti-conservative bias, something that has yet to be proven outside of anecdotes from lawmakers and conservative personalities.
Democrats, meanwhile, say the law allows tech companies to profit from the spread of disinformation and hate speech, since they don’t face legal repercussions for hosting it.
Trump served as one of the biggest threats to Section 230 in recent memory. Last year, the Trump administration asked the Federal Communications Commission (FCC) to narrow its interpretation of the law to weaken its protections. While Trump wanted the FCC to curb the instances where internet companies were protected from liability, the agency never ended up acting on the administration’s request.
Interestingly, Trump’s rival, President Joe Biden, has also slammed the law. He told The New York Times last year that he would like Section 230 killed because it allows misinformation and disinformation to spread across Facebook.
“It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false, and we should be setting standards not unlike the Europeans are doing relative to privacy,” Biden told The Times.
But according to The Brookings Institution’s TechTank blog, the Biden administration is unlikely to pursue changes to the law via the FCC, as a Biden-led commission is likely to leave any move up to Congress.
So what happens if Section 230 is completely removed from the picture? The internet as we know it would disappear. Without liability protections, companies like Facebook and Google would be unlikely to host user-generated content, as doing so would leave them open to a litany of lawsuits.
And if companies with market caps the size of small countries are afraid to host user content, imagine what that would mean for the up-and-coming firms looking to rival today’s tech giants.
Pressuring companies to make changes might be the best way forward
While Democrats now control Congress, their majority is slim, and both sides of the political divide want different things from a Section 230 overhaul. If we can’t fix Section 230, how do we hold internet giants accountable for what appears on their sites?
We need to hold tech companies accountable by shutting down our accounts when we don’t agree with a site’s policies — or by publicly calling out Big Tech when it fails us. We’ve already seen tech giants begin to remove harmful content following a public outcry.
After the Capitol attack, Facebook, Twitter, and Google all took action against Trump, removing a video in which he continued to peddle lies about the 2020 election. Facebook then struck Trump with an indefinite ban, while Twitter severed his ties with the site entirely.
“We’ve seen that marketplace mechanism...about the services stepping up to protect their users’ interests, and I really think that Twitter pulling the plug on a sitting president was a good example of that,” Eric Goldman, associate dean for research and professor at Santa Clara University School of Law, told Yahoo Finance. “It was something that Twitter knew would have massive repercussions for their business, but they felt like they had no other choice.”
To be sure, it’s not easy for consumers to leave the likes of, say, Google or Facebook when there aren’t comparable alternatives available. Ever try to Bing something? Or have you ever suspended your Facebook account, only to come back when you were deeply curious about where your high school buddy went on vacation?
Still, it’s worth remembering that those tech giants haven’t been around forever. And they are far from guaranteed to hold on to their positions.
Heck, at one point MySpace was an unstoppable juggernaut and Yahoo seemed like the only search engine worth your time. Consumers drive the market, and if they want to see change from the websites they use the most, they should use different services. These options include DuckDuckGo, a search engine that doesn’t collect user data. As for social media options, a host of competing services are popping up each day. Look no further than Snap and TikTok, which have risen up to compete directly with Facebook.
Pressuring the advertisers that do business with those firms can also help push them in one direction or the other. We’ve seen similar movements in the past, with companies pulling their ads from Facebook and YouTube for hosting objectionable content. And over the summer, the Stop Hate for Profit campaign sought to ensure advertisers took a stand against hate speech on the social platforms they advertise with, including Facebook.
And while Trump was still able to use the platforms after that, advertisers forced these sites to be more accountable for what the president was posting. This kind of pressure doesn’t bring the swift change that comes with a new law, but it can force Big Tech to be more accountable while Congress figures out what problem it’s trying to solve.