The decision by a Facebook Inc. panel to extend for up to six months former President Donald Trump’s banishment from the social media platform has renewed calls to revoke the legal shield that enabled Facebook to grow into one of the richest and most powerful companies in the world.
Minutes after the announcement, it was clear that the Facebook ruling hadn’t pleased liberals or conservatives. House Minority Leader Kevin McCarthy tweeted that Republicans would move to “rein in big tech power over our speech” if the GOP takes control of the House after the 2022 midterm elections.
“There is no backend accountability for Facebook. There’s no fine,” said Rashad Robinson, president of Color of Change, a civil rights group. “We have to end the immunity that these platforms have.”
Legal experts and academics say that curtailing the protection known as Section 230 could result in years of litigation and bedlam for the social media industry.
Tech companies fear lawsuits will explode, operating costs will soar and free speech will suffer if they lose their legal immunity. While the long-term effect on market shares is far from certain, in the short term Facebook, Alphabet Inc.’s Google and Twitter Inc. could become even more powerful as smaller networks fold because they can’t absorb the higher costs.
For some social media users, eliminating the shield might seem like a tonic: Tech platforms would finally have to answer for their actions in court.
But the prospect of large judgment awards could lead networks to clamp down hard on users’ posts, whether those are election falsehoods or #MeToo-style allegations of sexual harassment.
The upshot: The free-flowing content that has led to the creation of new business models, transformed personal relationships and powered social movements could disappear, along with Section 230.
“Section 230 has become this outsized influence on tech policy,” said Mary Anne Franks, a professor at the University of Miami School of Law. “It’s undeniable that if it were to be repealed or significantly changed, what would happen is a major disruption to the way that platforms consider their risks and their resources.”
Congress gave internet companies Section 230, part of the 1996 Communications Decency Act, as a quid pro quo. In exchange for the freedom to referee content, they aren’t legally liable for whatever they leave up or take down.
It’s not hard to imagine who would sue Silicon Valley’s biggest names if they thought they had a shot at winning.
Victims of revenge porn, sexual harassment, gun violence and privacy breaches could seek restitution. So could restaurant owners looking to stop rivals from posting fake reviews, conservatives claiming social media is censoring them and mothers complaining that their children are being bullied online.
The assaults on Section 230 are coming from the highest levels and from across the political spectrum.
As a presidential candidate, Joe Biden echoed the views of many Democrats when he said he favored repealing the clause because social networks weren’t doing enough to remove hate speech, conspiracy theories and falsehoods.
As president, Trump unsuccessfully tried to revoke it for an altogether different reason: He and other Republicans think the tech companies use the legal shield to remove right-leaning content.
Sundar Pichai, chief executive officer of Alphabet, which owns Google and its YouTube unit, painted his nightmare scenario for lawmakers in a March 25 House hearing. If the clause were revoked, he said, tech companies would have no choice but to follow the law that existed before Section 230.
“Platforms would either over-filter content or not be able to filter content at all,” said Pichai.
He was referring to court opinions from the 1990s that put internet companies in a bind. If they moderated what some users posted, they would be legally responsible for everything users posted, opening the door to lawsuits. Yet if they took a hands-off approach, they wouldn’t be held liable. So Congress passed Section 230 to protect internet companies if they acted responsibly and removed problematic posts.
Facebook has been running a public-relations campaign to pressure Congress to impose more regulation on social media rather than end legal protections altogether. CEO Mark Zuckerberg wants lawmakers to condition the legal shield on large platforms having systems to identify and remove unlawful content, with third parties determining whether the program is adequate. That closely resembles the oversight board process Facebook just used to review its Trump ban. Twitter CEO Jack Dorsey and Pichai expressed openness to the idea at the March House hearing.
But with the Trump decision fresh in their minds, lawmakers aren’t likely to find that satisfactory. Facebook removes numerous posts that violate its rules, but that hasn’t stopped objectionable content from proliferating across its platform, or soothed conservatives who think the owners of social media are biased against them.
“Your abuses of your privilege are far too numerous to be explained away and far too serious to ignore,” Rep. Jeff Duncan, a South Carolina Republican, told the tech CEOs in the March hearing. “So it’s time for your liability shield to be removed.”
Spokespeople from Facebook and Google declined to comment. Twitter didn’t respond to a request for comment.
WEAKEN THE SHIELD
Lawmakers have proposed a variety of measures to weaken the legal shield, ranging from forcing tech companies to treat political content neutrally to requiring them to eliminate hate speech and terrorist content, stop harassment and cyber-stalking, and prevent the sale of counterfeit goods. But Congress is far from agreeing on what it wants the tech companies to do.
Lawmakers must tread carefully: The First Amendment prohibits the government from regulating speech, such as by forcing a tech company to leave up or take down certain categories of posts.
Simply revoking Section 230 would toss the action into the courts. Judges would have to reinterpret old court rulings meant to address the responsibility that bookstores and newsstands have for what they sell and apply them to social media.
Even then, defining the new legal responsibilities for tech companies won’t be easy. People are fooling themselves when they say a few years of litigation would clarify the law on platform liability, said Daphne Keller, who directs Stanford University’s Program on Platform Regulation. “Then there are people like me who are like, ‘are you kidding? The number of different things there are to litigate is infinite.’”
Social networks would have to defend themselves against lawsuits that courts now dismiss because of Section 230. An analysis by the Internet Association of more than 500 court decisions involving the clause over two decades found that 43% involved allegations of defamation.
In the next most common claim, involving about 10% of the lawsuits, users argued their First Amendment rights or other legal protections were violated when companies removed or limited content.
The cost to fight a single lawsuit could total hundreds of thousands of dollars without the legal shield, according to Engine, an advocacy organization that has received funding from Google and represents startups.
“Without Section 230, you don’t get to assert an affirmative defense that early on,” said Engine Executive Director Kate Tummarello. Instead, a tech company might have to turn over everything “you’ve ever shared internally as a company on content moderation” to comply with the discovery process.
Some legal experts believe the possibility of costly damage awards would drive tech companies to tighten their content moderation practices and more strictly enforce terms of service.
Maybe that’s not such a bad outcome, said Franks, the law professor. “Industries ought to worry a little bit about whether or not they’re getting sued,” she said.
But if the platforms are held liable for everything they miss, and that liability overwhelms the value of their business, “the only answer is to opt out of the game altogether and shut down,” said Eric Goldman, a professor at Santa Clara University School of Law.
The fallout could be uneven. Large tech companies can absorb the costs of heightened legal exposure. Yet smaller platforms with fewer resources and greater dependence on user-generated content might buckle. Sites such as Yelp Inc. and Ripoff Report, which tracks complaints about businesses, might be forced to take down more content to sidestep lawsuits.
“Facebook and Twitter would figure out a way to survive,” Ripoff Report founder Ed Magedson said in a statement. “Smaller platforms like ours would be crippled.”
Yelp didn’t respond to a request for comment.
Tech companies might ban entire categories of content. For instance, to avoid defamation lawsuits, a platform might bar users from accusing others of sexual harassment or assault. If there were no Section 230, the #MeToo movement might not have gained traction, said Stanford’s Keller.
In 2018, after Congress passed a law weakening Section 230 if a company knowingly facilitated sex trafficking, Craigslist, a website for classified ads, closed its personals section altogether.
Social media companies have tried to rid their platforms of hate speech, some sexual content, and misinformation on elections and Covid-19. Facebook, YouTube and Twitter often rely on algorithms, and sometimes human reviewers, to detect falsehoods on these topics. Google and Twitter have created a variety of tools to fight disinformation, such as applying labels to misleading posts, reducing the spread of conspiracy theories and penalizing users who routinely break the rules.
But those efforts have failed to catch a steady stream of posts that violate the companies’ rules, from white supremacy groups that use social media to organize events that might result in violence, to anti-vaxxers who peddle false information about Covid-19 vaccines. Facebook allowed Trump to flout its voter-suppression rules when he questioned the legitimacy of mail-in ballots.
Then there are cases like Matthew Herrick’s. He sued Grindr, the LGBT-friendly dating app, alleging his ex-boyfriend created fake profiles of him and led hundreds of men to his home and workplace. His lawsuit argued that Grindr is a defective product and that he was harmed because its platform was easily manipulated.
Grindr said in a statement that in Herrick’s case the company “worked closely with law enforcement and took extensive steps to delete and ban fraudulent accounts.”
The U.S. Court of Appeals for the Second Circuit ruled against Herrick, citing Section 230. The Supreme Court declined to review the case.
“There was no one else in a position” to stop the harassment besides the platform itself, said Carrie Goldberg, Herrick’s lawyer. “But Grindr said that they had no liability to Matthew because of Section 230.”
© Copyright 2021 Bloomberg News. All rights reserved.