The Internet’s First Amendment: Section 230’s Freedom of Digital Speech

(Note to readers – this is a reprint of a research paper done in 2020. Since Section 230 is in the news, I thought it might be helpful to share.)

Section 230 of the Communications Decency Act (47 U.S.C. §230), enacted in 1996, largely immunizes internet platforms from liability stemming from user-generated content (UGC). It both shields platforms from suits over what users post and gives them permission to moderate content without exposing themselves to legal action.

Section 230 has been called the First Amendment of the Internet, but unlike the First Amendment, there have not been definitive court decisions on the scope of some of its protections in the decades it has existed. The internet that it addressed at its conception has changed significantly in the ensuing years. Websites, including blogs, have transformed, and in many ways, supplanted traditional news media.

Section 230 was limited in 2018 by the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA). It has been in the news recently because of the use of social media by ‘hate crime killers’ and other scandalous actors, with bipartisan proposals to limit its protections for platforms. Will Section 230 remain robust in the face of distasteful, disruptive, and disgusting speech?

This paper examines the current state of Section 230 and potential and proposed limits on it. Changes could impact the news media and the internet as we know it.


Section 230 [§230] is generally referred to as the First Amendment of the Internet, and as the most important law of the internet. It was enacted in 1996 as part of the Telecommunications Act, within the Communications Decency Act, and gave the fairly nascent industry room to grow and change robustly. §230 is two-pronged: it protects platforms from tort liability whether they moderate user-generated content or leave it alone.

It reads simply:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The internet as we know it exists because of §230.

§230 came into existence primarily because of two court cases that were resolved in different ways: Cubby Inc. v. CompuServe Inc (1991), and Stratton Oakmont, Inc. v. Prodigy Services Co (1995).

In the former, the court held that CompuServe was a distributor of content, not a publisher, and was not liable, building on earlier case law established in Smith v. California. At issue was a dispute between Rumorville and Skuttlebut, two competing online subscriber forums. Cubby, Inc. complained to the courts that Rumorville libeled it and its creators, and that CompuServe published the information. The court framed CompuServe as a ‘for-profit library’ that did not have knowledge of the defamatory content before publication (Cubby, Inc. v. CompuServe, 1991).

In the latter case, the court decided that because Prodigy exercised some editorial control over the content, it was liable. The plaintiffs and the court cited Prodigy’s published policies and declarations of editorial control over user-generated content, which aligned with the characteristics of a publisher rather than a distributor (Stratton Oakmont v. Prodigy, 1995).

The dilemma of these seemingly conflicting decisions needed to be resolved. Then-Representatives Chris Cox and Ron Wyden proposed the language that would become §230, which passed Congress and became law in 1996.

The Communications Decency Act’s anti-indecency provisions were immediately challenged, and ultimately struck down by the Supreme Court in Reno v. ACLU. §230 survived that ruling.

Important milestones in §230

§230 was designed to accomplish several goals. As with the First Amendment, there were worries about collateral censorship (intermediaries suppressing others’ speech to avoid being held liable themselves), and therefore concerns about intermediary liability.

New laws are passed by Congress but shaped by the courts, and numerous cases have built out §230. The original act carved out several areas of law where §230 would not apply: federal criminal law, intellectual property law, state law consistent with §230, and communications privacy law. (Sex trafficking was added as an exception in 2018.) How else did the courts expand or contract the protections of §230?

The courts recognize the revolutionary change the internet has brought to modern society. Supreme Court Justice Kennedy wrote in Packingham v. North Carolina,

While we now may be coming to the realization that the Cyber Age is a revolution of historic proportions, we cannot appreciate yet its full dimensions and vast potential to alter how we think, express ourselves, and define who we want to be. The forces and directions of the Internet are so new, so protean, and so far reaching that courts must be conscious that what they say today might be obsolete tomorrow (p.6).

And in Chicago Lawyers’ Committee for Civil Rights v. Craigslist, heard in the Seventh Circuit Court of Appeals, Chief Judge Easterbrook typified the message that multiple courts sent, writing

Using the remarkably candid postings on craigslist, the Lawyers’ Committee can identify many targets to investigate. It can dispatch testers and collect damages from any landlord or owner who engages in discrimination….It can assemble a list of names to send to the Attorney General for prosecution. But given § 230(c)(1) it cannot sue the messenger just because the message reveals a third party’s plan to engage in unlawful discrimination (Chicago Lawyers v. Craigslist, 2008).

One of the cases that continues to be cited by both supporters and detractors of §230 is Zeran v. America Online, decided in 1997. Kenneth Zeran believed that AOL was negligent in failing to remove content that prompted extreme harassment of him. Zeran labeled AOL a publisher, not a distributor, and also claimed that since the harassment occurred before the enactment of §230, the statute did not retroactively protect AOL. The appeals court denied his appeal.

Zeran was the first big test of §230. Eric Goldman, Professor of Law at Santa Clara University School of Law, called it “the most important court ruling in Internet Law” (Goldman, p.3).

According to Goldman and others, Zeran established very broad protection for the platforms. They are protected even when they are aware of defamatory content and even if they fail to screen for defamatory content. Goldman seems to believe that online platforms have First Amendment protections beyond analog distributors,

The ruling also made clear that Section 230 protects websites’ decisions about publishing, editing, or removing third-party content – activities that, in the offline world, would ordinarily dictate a publisher’s liability for that content (Goldman, p.3).

The first limits

A case that ultimately put narrow limits on §230 protections was Fair Housing Council of San Fernando Valley v. Roommates.com. The Ninth Circuit Court of Appeals found that Roommates.com was not protected by §230 because it solicited content (through its questionnaires) and used that content (in roommate search matching), making it both a content provider and a content maker.

Roommates has a convoluted history, though. Brought to a district court in 2004, it was then argued in 2006 to the Ninth Circuit Court of Appeals. In a panel decision, the court reversed the ruling that §230 protected the website from accusations of violating the Fair Housing Act.

However, each of the judges wrote concurring opinions, with one judge also dissenting in part. The case was heard again, en banc (by the whole Ninth Circuit court) in 2008. The court held that Roommates was liable in part, for the content and algorithms they created, and also still protected from liability for any comments posted on the site.

Then Roommates made another appearance in that court, decided in 2012, where the scope was limited to whether the Fair Housing Act applied to Roommates.com at all. The court found that it didn’t. Had that question been settled four years earlier, there would have been no 2008 decision on §230.

Another blow

There was irony in finding the headline, Big Win For Free Speech Online In Backpage Lawsuit (Goldman, 2016). That big win prompted stronger efforts to create the first incursion into regulating §230. The case was Doe v. Backpage, one of numerous lawsuits against the now-shuttered website, this one by victims of sex trafficking. First Circuit Judge Selya wrote in the decision

This is a hard case—hard not in the sense that the legal issues defy resolution, but hard in the sense that the law requires that we, like the court below, deny relief to plaintiffs whose circumstances evoke outrage…Congress later addressed the need to guard against the evils of sex trafficking when it enacted the Trafficking Victims Protection Reauthorization Act of 2008 (TVPRA), codified as relevant here at 18 U.S.C. §§ 1591, 1595. These laudable legislative efforts do not fit together seamlessly, and this case reflects the tension between them (Doe v. Backpage, 2016).

But deny relief they did, based on established law in §230. That and other cases impelled organizations and others to pressure Congress into changing the law.

FOSTA fallout

Concerns about sex trafficking, and a perceived lack of authentic effort by the large internet platforms, coalesced as Congress folded SESTA (the Stop Enabling Sex Traffickers Act) into FOSTA (the Allow States and Victims to Fight Online Sex Trafficking Act of 2017), signed into law in 2018. It was the first Congressional law to limit §230:

To amend the Communications Act of 1934 to clarify that section 230 of such Act does not prohibit the enforcement against providers and users of interactive computer services of Federal and State criminal and civil law relating to sexual exploitation of children or sex trafficking, and for other purposes (H.R.1865, 2018).

Does writing that the language in FOSTA is a clarification leave the door open for further clarifications?

The amendment also contains provisions for holding online advertisers liable if they profit from sex trafficking. Is that prior restraint? So far there doesn’t seem to be any court activity around it.

FOSTA has not received unanimous support from critics of §230. Just the title of one article by active academic critics of §230 lets you know their stance: FOSTA: The New Anti-Sex-Trafficking Legislation May Not End the Internet, But It’s Not Good Law Either.

The authors argue that vague terms like “knowing facilitation” and other requirements create a “moderators’ dilemma” leading to either over-moderation or an absence of it. Worse, what they see as a piecemeal approach to the law will cause further confusion and harm, instead of creating a comprehensive standard of responsibility balancing illegal content and free speech (Citron & Jurecic, 2018).

The Department of Justice’s Assistant Attorney General Boyd testified to Congress that the act might make it harder to catch criminals, and would criminalize past behavior (Boyd, 2018). Issues identified by critics of FOSTA fall into two areas: first, the term “knowing facilitation” is vague; second, the law addresses only one problematic area of unfettered speech, instead of comprehensively addressing terrorism, hate speech, harassment, and defamation.

Senator Ron Wyden, one of the original drafters of §230, tweeted when FOSTA was passed that “today we take a real step backward” (Wyden, 2018).

The Electronic Frontier Foundation is supporting a group suing the government to overturn FOSTA in Woodhull Freedom Foundation et al v. United States. Their original petition was dismissed by a federal judge for lack of standing. Their appeal was heard this past September in the Court of Appeals for the D.C. Circuit; there has not been a decision yet. The plaintiffs are asking for an injunction against FOSTA, arguing that it is overbroad and vague, causing them to censor themselves. (Update to this section: The EFF continues to fight against incursions on the First Amendment by FOSTA, now in an appeals court in D.C.)

Perceived problems with §230

YouTube carries videos of live-streamed massacres, beatings, rapes, and other violent assaults. Hate groups have invited new acolytes into chat rooms, and acolytes have found instructions on Reddit for how to kill multiple people quickly. Videos of children being sexually and physically abused are shared via Dropbox. All of the worst possible things have become available online. Vulnerable people are targeted.

Is it any wonder that there is a groundswell of anger against the companies that seem to be doing little to control the terrible content on their platforms?

Professor Mary Ann Franks, of the University of Miami’s Law School, wrote in Moral Hazard on Stilts: Zeran’s Legacy,

Today, the Internet is awash in threats, harassment, defamation, revenge porn, propaganda, misinformation, and conspiracy theories, which disproportionately burden vulnerable private citizens including women, racial and religious minorities, and the LGBT community (Franks, 2017).

Harassment and other abuse suppress the speech of more vulnerable populations, according to a report by the Data & Society Research Institute: more than a quarter of users self-censor, and among targets of online harassment, more than half do (Lenhart et al., 2016).

Dr. Franks argued in the article and elsewhere that by so wholly shielding internet companies from distributor liability, §230 makes them less responsive in moderating their sites, defeating rather than meeting the goal of the CDA.

She believes that there is little motivation for companies to initiate significant changes because of the §230 provisions. Why spend valuable time and money when you control the market and don’t have to?

Other critics state that the courts have broadened §230’s protections far beyond its original intentions.

The court’s wide interpretation of §230 led to immunity so sweeping that it protects Airbnb from housing discrimination lawsuits. It shields revenge porn sites like The Dirty that post user submissions; sites hosting non-consensual pornography; and companies like AOL from being responsible for defamation by writers that it exclusively pays (Blumenthal, 2018).

There are areas where the platforms are not neutral. For example, the website The Dirty’s business model relies solely on advertising and salacious user-generated content. Could that website thrive if it were held liable for revenge porn postings? Jones v. Dirty World, a Sixth Circuit Court of Appeals decision, upheld §230’s protection for the website, even though it would create content around user-submitted photos and videos.

There has been bipartisan distaste for §230, partially prompted by Big Tech’s handling of a broad range of issues. When both Ted Cruz and Nancy Pelosi feel the same way about a law, something is afoot (Wakabayashi, 2019).

Senator Mark Warner put out a white paper on regulating Big Tech, including possibly revising or repealing §230. However, in an interview he talked about how online identity transparency could diminish the need for a change.

“If someone had to own their content with their real identity … you might need less movement on 230,” he said. “Because if we’re trying to think, ‘How do we at least slow or make people think a little bit?’ … you know who’s who (Johnson,E. 2019).”

Warner is also looking at how the algorithms of the biggest websites not only collect user data, but guide users to specific content, some of it violent, threatening, or illegal. Transparency in algorithms might also shield §230 from federal changes. It seems though that tech companies would have to be compelled by law to be more transparent.

Regulating tech is a subject that both the president and presidential candidates have discussed frequently.

On the opposite side of the aisle, Josh Hawley of Missouri and others have proposed that online companies lose their §230 protections if they don’t pledge political neutrality (Brown, 2019).

He proposed the Ending Support for Internet Censorship Act, a bill that would prohibit a large social media company from moderating information on its platform from a politically biased standpoint. It seems unlikely to pass, given the obvious First Amendment conflict.

In October 2019, Representative Mike Doyle led a substantive panel discussion, asking leaders of Reddit, Google, and others,

“What is the solution between not eliminating 230 because of the effects that would have on the whole internet and making sure we do a better job of policing?”

Just a few days later, as Facebook announced the steps it was taking to fight false election information, it also announced that there would not be fact-checking of the political ads placed on the platform. Since then, the company has floated the idea of labeling political ads as not fact-checked, but hasn’t made that policy (Romm & Stanley-Becker, 2019).

There are valid concerns about over-moderation on websites, but a bigger fear is that proposed regulatory changes will achieve the opposite of what §230 was intended to do: censor speech and kill websites. The large companies can afford to fight political battles; smaller sites don’t have their assets.


Both free speech and tech advocates robustly support §230. They view its benefits as vital not only to the health of online platforms, but as a bedrock of free speech in the 21st century. The world is becoming more digital. Almost 60% of the global population can get online, and almost 90% of North Americans have internet access.

Organizations like The Electronic Frontier Foundation, the Knight First Amendment Institute, and the Reporters Committee for Freedom of the Press all argue for keeping §230’s protections in place.

There are no global standards for online free speech. §230-like protections have been inserted into trade agreements with Canada, Japan, and Mexico, and are proposed in other trade talks (McCabe & Swanson, 2019). The US might be trying to shape the global standards for tech companies’ protections.

The European Union has taken far stricter stances on privacy than the US, and China has its own internet and regulations. American tech companies have to figure out global strategies, and it is in their favor to promote the protections of §230.

There is also support for the type of speech which is often vilified in Congress and in various talking points – the news. While YouTube might create algorithms that prevent viewers from seeing violent content, some of that content could be vital community journalism from battles in Syria (Browne, 2017), or natural disasters around the world.

While much of the current distaste for §230 is directed at social media sites, newsrooms are beneficiaries of §230 as well. Not only does every major newsroom have extremely active, and interactive, websites, but comments and user generated content on those websites generate information for both the news and administrative sides of the newsroom.

Smaller publications rely heavily on user generated content, with contributors arrayed throughout their community. Checking every line of every story and every photo may not be possible for a small online publication, which could be the only local news source in an area.


The First Amendment of the Internet, §230, probably won’t stay the same for long, and those who have benefited from it the most may cause its undoing. The top companies in the world (#1-4: Apple, Microsoft, Amazon, and Alphabet [Google’s parent]; #6 and #7: Facebook and Alibaba) (Statista, 2019) don’t want it changed, yet seem slow to respond with appropriate measures or are tone-deaf in their responses.

§230 was developed long before social media as we know it today, and before the widespread use of algorithms on those websites. Use of the algorithms in sharing and targeting certain users with types of user generated content might reframe the platforms as publishers, since they are taking a more active role in content distribution.

The Zeran decision seems to have great relevance to the current discussion about social media’s distribution of harmful speech. Companies such as Facebook, YouTube, Google, and others may choose to keep content available even though they have received notification that it is false, defamatory, or inflammatory.

Breaking up Big Tech is a theme on the campaign trail, but whether that includes altering §230 isn’t clear. Lawmakers have warned Big Tech that unless the platforms improve, through transparency or active changes to their policies, the protections they have benefited from may not stay the same.

The New York Times recently published an article, Child Abusers Run Rampant as Tech Companies Look the Other Way, outlining the many ways pictures and videos of children being sexually abused remain active online (Keller & Dance, 2019). Are tech companies protected for their part in providing the means for users to share and profit from that content? Balancing privacy, safety, and of course profit comes into play.

There are a couple of other factors that might affect the future of §230. The first is completely unknown: will future generations need or value §230? How will expectations of digital use and content evolve? 2019 marks the 50th year since the first computer-to-computer connection. What happens in the next 50?

A more specific and tangible factor will be the global market for Big Tech companies. §230 is a very American ideal, built upon the First Amendment. The European Union, United Kingdom, China, and others around the world have differing priorities.

If the platforms using user generated content have to change their policies to have access to the global markets, that will affect how they address the issues inherent in the untamed internet. It might turn out that American lawmakers, courts, and companies will not be the ones to shape the future of §230.

References – Court Cases

Chicago Lawyers’ Committee for Civil Rights v. Craigslist, 519 F.3d 666 (7th Cir. 2008)

Cubby, Inc. v. Compuserve Inc., 776 F. Supp. 135 (S.D.N.Y. 1991)

Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) & 666 F.3d 1216 (9th Cir. 2012)

Jane Doe v. Backpage Inc., 817 F.3d 12 (1st Cir. 2016)

Jones v. Dirty World, 755 F.3d 398 (6th Cir. 2014)

Packingham v. North Carolina, 137 S. Ct. 1730 (2017)

Reno v. ACLU, 521 U.S. 844 (1997)

Smith v. California, 361 U.S. 147(1959)

Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995)

Woodhull Freedom Foundation et al v. United States Of America et al, No. 1:2018cv01552 (D.D.C. 2018)

Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997)


Balkin, J. M. (1999). Free Speech and Hostile Environments. Columbia Law Review, 99(8), 2295.

Beausoleil, L. E. (2019). Free, Hateful, and Posted: Rethinking First Amendment Protection of Hate Speech in a Social Media World. Boston College Law Review, 60(7), 2101–2144. Retrieved from

Blumenthal, P., (August 6, 2018). The one law that’s the cause of everything good and terrible about the internet. Retrieved from

Boyd, S. E., (February 27, 2018). Letter from Department of Justice Assistant Attorney General to Congress. Retrieved from

Brown, E.N., (July 29, 2019). Section 230 is the internet’s First Amendment. Now both Republicans and Democrats want to take it away. Retrieved from

Browne, M., (August 22, 2017). YouTube removed videos showing atrocities in Syria. Retrieved from

Citron, D., & Jurecic, Q., (March 28, 2018). FOSTA: The new anti-sex-trafficking legislation may not end the internet, but it’s not good law either. Retrieved from

Electronic Frontier Foundation (n.d.). Section 230 of the Communications Decency Act. Retrieved from

Electronic Frontier Foundation (n.d.). Section 230 – bloggers’ legal liability. Retrieved from

FOSTA: H.R.1865 – Allow States and Victims to Fight Online Sex Trafficking Act of 2017 H.R.1865 — 115th Congress (2017-2018)

Franks, M.A., (November 10, 2017). Moral hazard on stilts: ‘Zeran’s’ legacy. Retrieved from

Goldman, E., (March 17, 2016). Big win for free speech online in Backpage lawsuit. Retrieved from

Goldman, E. (2017). The Ten Most Important Section 230 Rulings. Tulane Journal of Technology and Intellectual Property, 1. Retrieved from

Harmon, E., (October 16, 2019). Changing Section 230 would strengthen the biggest tech companies. Retrieved from

Harvard Law Review (May 10, 2018). Section 230 as First Amendment rule. Retrieved from

Hawley, J., Senator (June 19, 2019). Ending Support for Internet Censorship Act. S.1914 — 116th Congress (2019-2020)

Hawley, J., Senator (November 27, 2018). Tweet. Retrieved from

Internet World Stats (2019). Internet usage statistics. Retrieved from

Johnson, E., (September 30, 2019). Three big ideas for tech regulation from Senator Mark Warner. Retrieved from

Keller, M. H., & Dance, G. J. (November 9, 2019). Child abusers run rampant as tech companies look the other way. Retrieved from

Kosseff, J., (2019). The twenty-six words that created the internet. Ithaca, NY: Cornell University Press.

Laslo, M., (August 13, 2019). Fight over section 230 as we know it. Retrieved from

Lenhart, A., Ybarra, M., Zickuhr, K., & Price-Feeney, M., (November 21, 2016). Online harassment, digital abuse, and cyberstalking in America. Retrieved from

Martineau, P., (October 17, 2019). An actual debate over the internet’s favorite legal shield. Retrieved from

McCabe, D. & Swanson, A., (October 7, 2019). U.S. using trade deals to shield tech giants from foreign regulators. Retrieved from

Moore, D., (October 16, 2019). Doyle wades into debate over liability shield for internet publishers in era of hate speech, illegal activity. Retrieved from

Reporters Committee for the Freedom of the Press (2014). Republication in the Internet age. Retrieved from

Romm, T. & Stanley-Becker, I., (December 4, 2019). Facebook has floated limiting political ads and labeling that they aren’t fact-checked, riling 2020 campaigns. Retrieved from

Rosen, G., Harbath, K., Gleicher, N., & Leathern, R., (October 21, 2019). Helping to protect the 2020 US elections. Retrieved from

Statista (2019) The 100 largest companies in the world by market value in 2019 (in billion U.S. dollars). Retrieved from

Wakabayashi, D., (August 6, 2019). Legal shield for websites rattles under onslaught of hate speech. Retrieved from

Warner, M., Senator, (July 23, 2018) Potential policy proposals for regulation of social media and technology firms. Retrieved from

Wyden, R., Senator, (March 21, 2018). Tweet. Retrieved from

Zara, C., (January 3, 2017). The most important law in tech has a problem. Retrieved from
