The Supreme Court of the United States (SCOTUS) is in session right now. This term, the nine justices of the highest court in the U.S. are hearing two cases – Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh – that test the scope of Section 230 of the Communications Decency Act and its effects on online content moderation.
But just what is Section 230? And why should you care?
The Communications Decency Act and Section 230
To understand what Section 230 is and why it’s relevant, we actually need to go back almost 30 years to the mid-1990s. During that era, the internet was just becoming a household presence, and the federal government was scrambling to figure out an appropriate regulatory framework for this new communications medium.
In 1996, Congress passed the Communications Decency Act (CDA), which was predominantly aimed at putting guardrails around online pornography distribution. However, the CDA ultimately had the effect of setting policy for all internet communication. Section 230 of the Act provides that no provider of an “interactive computer service” shall be treated as the publisher or speaker of information provided by someone else – in effect, online service providers cannot be held liable for content that subscribers publish on their websites or platforms.
The Zeran v. America Online Case
Around the same time as the CDA went into effect, a major lawsuit emerged that immediately put its rules to the test. Ken Zeran was a real estate professional from Seattle. One day in April 1995, Zeran began receiving very strange phone calls on his cell phone. Angry people were calling him, screaming and yelling, one after the other.
After a bit of research, Zeran discovered that his name and cell phone number had been posted to an America Online (AOL) message board in association with an illegitimate advertisement. The advertisement was for specialty T-shirts that he was purported to have made.
The advertised shirts bore extremely offensive messages about the recent Oklahoma City bombing, the April 1995 attack in which a domestic terrorist blew up a federal building and killed 168 people.
As an example, one shirt’s message said, “Come to Oklahoma…it’s a BLAST!!!”, according to NPR. The advertisement specifically told viewers to call Ken’s cell phone number and “ask for Ken.” It also said to keep calling if the line was busy.
But Zeran did not make those offensive T-shirts and had nothing to do with this posting on AOL. Clearly, he was the victim of some kind of nasty prank.
To stop the calls, he contacted AOL and advised them of the defamatory content on their site. After some back-and-forth, AOL eventually agreed to remove the advertisement. However, AOL declined to post a retraction or any other kind of notice advising its subscribers that the original posting was a hoax.
So Zeran sued AOL in federal court, according to CaseText. AOL invoked Section 230 as an affirmative defense, and the District Court granted summary judgment in favor of AOL.
Later, Zeran appealed the decision, and the case went before the U.S. Court of Appeals for the Fourth Circuit. On appeal, Zeran made a number of clever arguments to try to sidestep the protections of Section 230.
For instance, Zeran argued that although Section 230 immunized “publishers” from liability for content from third parties, online platforms should still be liable because they were acting more like distributors of content than publishers themselves. In this line of thinking, a distributor is like a newsstand or bookstore: it merely passes along content written by others, without exercising the editorial control that a newspaper does over its own pages.
Zeran also argued that AOL and other providers should face liability under ordinary negligence law as soon as they are notified that defamatory content has been published on their sites. Receipt of such a notice, Zeran contended, should trigger an immediate duty to investigate, remove the unlawful content and notify readers of the correction.
But the court ultimately disagreed with Zeran. In their opinion, the appellate judges explained that Zeran’s arguments did not hold water because his interpretation of Section 230 ran contrary to several specific aims of the CDA.
First, Section 230 was drafted in such a way so as to promote free speech on the internet. Congress worried that, if online service providers like AOL were saddled with potential liability for each and every dumb or illegal thing that a user posted to their website, they would probably severely restrict posting permissions – both in quantity and quality.
These restrictions would then have the effect of “chilling” or limiting free speech and expression online. So instead, Congress included Section 230 to afford companies like AOL immunization from liability for such content to ensure that online platforms were not pressured to suppress free speech.
On the other hand, another aim of Congress was to encourage companies like AOL to monitor the content on their sites and make good-faith efforts to self-regulate in ways appropriate for the rules of each platform. The assumption was that the private sector would be best equipped to navigate this new frontier of communication and speech online.
But legislators also realized that if Section 230 imposed a duty to act upon notice of concerning content, providers would need to investigate and make decisions about each and every flagged post in real time. Such a rule could perversely incentivize platforms to avoid monitoring and policing their users’ content altogether, so as never to know about such problems and have to act on them.
These aims make sense, after all. Consider, as an example, the modern-day state of traffic on the popular social media site Facebook. According to Bernard Marr & Co., “There are five new Facebook profiles created every second! More than 300 million photos get uploaded per day. Every minute there are 510,000 comments posted and 293,000 statuses updated.”
Now imagine if Facebook were required to police the content of each and every post for truth and legitimacy. How many millions of employees would it need just to look at each piece of content, even if only for one second? Beyond that, many concerning posts would require research into a post’s truth value, privacy implications and so on.
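To put rough numbers on that thought experiment, here is a back-of-envelope sketch. The traffic figures are the ones quoted above from Bernard Marr & Co.; the per-item review times and the 8-hour reviewer shift are purely illustrative assumptions.

```python
# Back-of-envelope estimate of moderation staffing, using the Facebook
# traffic figures quoted above (assumptions: review times and 8-hour shifts).
MINUTES_PER_DAY = 24 * 60

comments_per_day = 510_000 * MINUTES_PER_DAY   # 510,000 comments per minute
statuses_per_day = 293_000 * MINUTES_PER_DAY   # 293,000 status updates per minute
photos_per_day = 300_000_000                   # 300 million photos per day

items_per_day = comments_per_day + statuses_per_day + photos_per_day
# items_per_day works out to roughly 1.46 billion pieces of content per day.

for seconds_per_item in (1, 30):
    review_hours = items_per_day * seconds_per_item / 3600
    reviewers = review_hours / 8  # assume one 8-hour shift per reviewer per day
    print(f"{seconds_per_item:>2} s per item -> ~{reviewers:,.0f} full-time reviewers")
```

Even at a single second per item, this naive estimate lands around 50,000 full-time reviewers; at a more realistic 30 seconds per item, it climbs past 1.5 million – before accounting for any actual research into a post’s truthfulness.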
It’s Not Possible to Regulate Everything That Is Said Online
The reality is that such expectations for policing online content just aren’t workable. They weren’t workable in 1996 when commercial online services had barely 12 million subscribers. And they aren’t workable now that roughly 5 billion people – or about 60% of the world’s population – are online, according to Zippia.
So as much as any compassionate person might empathize with what Zeran went through because of the nefarious acts of some internet troll, he still lost his case. Internet communication platforms enjoy immunity from liability for the content of posts made by users. That is, at least, for now.
Decisions in the New SCOTUS Cases Could Change Section 230 and Liability for User Content
As I noted previously, SCOTUS is currently hearing arguments in two cases – Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh – both of which aim to shake up the foundation laid by Section 230, according to TIME magazine.
Interestingly, both cases stem from terrorist attacks carried out by the Islamic State of Iraq and Syria (ISIS). The plaintiffs allege that failed self-regulation by social media platforms like YouTube and Twitter allowed terrorist groups to use these websites to promote themselves, recruit members, plan attacks, and ultimately kill hundreds of people.
It’s too early to tell where the Supreme Court will land on these cases. Some experts believe that the justices may sharply narrow Section 230’s immunity and impose some new level of required content moderation on online platforms. If that turns out to be the case, it will likely drastically change the way we interact with the internet and with each other online.
Either way, these Section 230 cases will undoubtedly present a challenge for the Supreme Court in trying to balance the policy concerns originally articulated in the Zeran case with public safety concerns emerging around modern-day use of the internet. It will be interesting to see what compromises are made and what priorities prevail – stay tuned for the SCOTUS decision later this year.