You post a political opinion on Facebook. Hours later, a notification appears: your content has been removed. Your account is restricted. In some cases, you’re permanently banned.
For many Americans, the immediate reaction is outrage, often framed around a fundamental question: “What about my right to free speech?”
This question cuts to the heart of a complex intersection of constitutional law, federal statutes, and private corporate power. While social media platforms like Facebook feel like the modern public square, the legal framework governing them is fundamentally different from actual public spaces.
Understanding these layers explains why your free speech rights as a citizen typically do not limit what a privately owned digital platform can allow, remove, or restrict.
The Constitution’s Reach: Understanding the First Amendment
The foundation of the “free speech” argument in America is the First Amendment. However, its power to protect speech has very specific and widely misunderstood boundaries. At its core, the amendment restricts government, not private citizens or corporations. This distinction is the most important concept for understanding why Facebook can legally set its own speech rules.
What the First Amendment Actually Says and Does
The First Amendment begins with five crucial words: “Congress shall make no law…” The full text of the speech and press clauses reads: “Congress shall make no law… abridging the freedom of speech, or of the press…”
By its plain text, the amendment was designed as a direct constraint on the federal legislature, and courts have long read it to bind the entire federal government, including federal agencies. Through the “doctrine of incorporation,” the Supreme Court has extended these restrictions, via the Fourteenth Amendment, to state governments and local municipalities as well.
This means a city government cannot pass an ordinance preventing you from peacefully protesting on a public sidewalk. A state university cannot expel a student merely for expressing an unpopular political view. The federal government cannot imprison a journalist for publishing critical reporting. These are all examples of government action.
However, the First Amendment’s protections do not extend to the actions of private entities. A private company, such as Facebook, is not a government body. Therefore, it is not directly bound by the First Amendment’s command not to abridge freedom of speech. This allows private organizations, from workplaces to social clubs to online platforms, to establish their own rules of conduct and speech for their members and users.
Even against government, the right to free speech is not absolute. Courts have long recognized that government can lawfully restrict certain categories of speech not protected by the First Amendment, such as incitement to imminent lawless action, defamation, obscenity, true threats, and fraud.
The “State Action” Doctrine: When Private Companies Become Government
While the default rule is that the Constitution doesn’t apply to private companies, there are narrow exceptions under the “state action” doctrine. This doctrine provides the test for determining whether a private entity’s conduct is so entangled with government that it should be treated as a government actor for constitutional purposes.
The Supreme Court has identified limited circumstances where a private company can be considered a state actor:
The Public Function Test: This applies when a private entity performs a function that has been “traditionally and exclusively” reserved for the state.
The Government Compulsion Test: This applies when government has coerced or significantly encouraged the private entity to take a particular action.
The Joint Action or Entwinement Test: This applies when government and the private entity are acting in concert or their operations are deeply intertwined.
For the social media debate, the “public function” test has been most relevant. The classic example comes from the 1946 Supreme Court case Marsh v. Alabama. The Court examined a “company town”—a town entirely owned by a private corporation, Gulf Shipbuilding. The town had its own streets, sidewalks, and residential areas, functioning just like any other municipality.
When a resident was arrested for distributing religious literature on a sidewalk in the company town, she claimed her First Amendment rights were violated. The Supreme Court agreed, ruling that because the company operated the “full spectrum of municipal powers,” it was performing a public function and was therefore subject to the Constitution. The Court reasoned that residents of a company town have the same need for “uncensored information” as residents of any other town.
The Modern Public Square Argument
The Marsh case has served as the intellectual foundation for the argument that large social media platforms are the “modern public square.” Proponents argue that because platforms like Facebook and X (formerly Twitter) host so much of our nation’s political and social discourse, they have effectively taken on the role of the traditional town square or public park.
The Supreme Court itself has used this metaphor, most notably in the 2017 case Packingham v. North Carolina, where it described social media as “the modern public square” and one of the “most powerful mechanisms available to a private citizen to make his or her voice heard.”
However, a poetic metaphor is not a legal rule. The legal viability of the “modern public square” argument was tested directly in the 2019 Supreme Court case Manhattan Community Access Corp. v. Halleck. The case involved a private, non-profit corporation, Manhattan Neighborhood Network (MNN), that was designated by New York City to operate public access television channels.
When MNN suspended two producers for airing a film critical of the network, the producers sued, arguing that MNN was a state actor performing the public function of running a public forum and had thus violated their First Amendment rights.
In a 5-4 decision, the Supreme Court rejected this argument. Writing for the majority, Justice Brett Kavanaugh held that MNN was not a state actor. The opinion established a critical legal distinction: merely providing a forum for speech is not a function that has been “traditionally and exclusively” performed by government.
The Court noted that throughout history, private actors have owned and operated forums for speech, from lecture halls to newspapers. For an entity to be considered a state actor under the public function test, the function it performs must be one that government has historically performed on an exclusive basis, such as running elections or governing a town.
The Halleck decision effectively closed the door on the most straightforward legal path to treating social media platforms as state actors. The ruling clarifies that the “public square” is a description of a platform’s societal role, not its legal status.
Under current constitutional law, a private company’s property doesn’t become a public forum subject to the First Amendment simply because the public uses it to speak. This reveals a deep disconnect between the functional reality of the digital age—where private platforms host the bulk of public debate—and a legal doctrine that remains tied to a rigid, historical test of government action.
The Law That Built the Modern Internet: Section 230
While the First Amendment explains why government cannot force Facebook to host certain speech, a specific federal statute explains why Facebook is legally empowered to moderate its platform and is protected from most lawsuits over its decisions. That law is Section 230 of the Communications Decency Act of 1996, legislation that has been called “the twenty-six words that created the internet.”
The “26 Words That Created the Internet”
In the early 1990s, as online forums began to emerge, courts produced two conflicting rulings that created a “moderator’s dilemma” for platform operators. In one case (Cubby, Inc. v. CompuServe Inc.), a court ruled that a platform that didn’t moderate its content was like a bookstore—not liable for the content on its shelves. In another (Stratton Oakmont, Inc. v. Prodigy Services Co.), a court ruled that a platform that did engage in some moderation was acting like a publisher and could therefore be held liable for all user-generated content on its site.
This created a perverse incentive: either moderate nothing and allow your platform to become a cesspool of illegal and harmful content, or attempt any moderation and risk being sued into oblivion.
Congress stepped in to resolve this dilemma by passing Section 230. The stated policy of the law was to promote the continued development of the internet, preserve a competitive free market “unfettered by Federal or State regulation,” and encourage platforms to develop tools to block and filter objectionable material. It sought to remove the disincentives for platforms to self-regulate their content.
The Two Shields of Section 230
Section 230 achieves its goals by providing two distinct but related legal shields. These protections work together to give platforms broad discretion over the content they host.
Shield 1: Immunity as a “Publisher or Speaker”
The most famous part of the law is Section 230(c)(1), which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This is the core liability shield. An “interactive computer service” is broadly defined to include any online platform that enables users to post content, like Facebook. An “information content provider” is the person or entity that creates the content—in this case, the user who writes a post or uploads a video.
This provision means that if a user posts something defamatory, illegal, or harmful on Facebook, the user can be held liable, but Facebook generally cannot be sued for simply hosting it. The law treats the platform like a newsstand or library that distributes material, not like the publisher responsible for what it prints.
Shield 2: The “Good Samaritan” Right to Moderate
The second shield is what directly empowers Facebook to ban users and remove content. Section 230(c)(2) provides legal immunity for: “…any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
This “Good Samaritan” provision is a powerful grant of authority. The key phrases are “considers to be” and “otherwise objectionable.” This establishes a subjective standard; the platform itself gets to decide what it finds objectionable.
The phrase “whether or not such material is constitutionally protected” is also critical. It means that Facebook can remove content that is perfectly legal speech under the First Amendment without fear of being sued for doing so. This provision directly solves the moderator’s dilemma by protecting the act of moderation itself.
How Courts Have Interpreted Section 230
Since its passage, courts have consistently interpreted Section 230’s protections broadly. In the influential case Zeran v. America Online, Inc., a federal appeals court held that the publisher immunity bars lawsuits that would hold a platform liable for its “traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.”
This broad interpretation has been extended to cover not just decisions to remove content but also decisions about how to organize and present it, including through the use of algorithms to curate news feeds.
While the law grants a powerful right to moderate, it is not a blank check. The law doesn’t protect platforms from liability for their own content, nor does it provide immunity from federal criminal laws, intellectual property claims (like copyright infringement), or, following a 2018 amendment known as FOSTA-SESTA, certain federal and state sex trafficking laws.
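To make the interplay of the two shields and their carve-outs concrete, here is a minimal, purely illustrative Python sketch. The `Claim` fields, category names, and the `section_230_shields_apply` helper are hypothetical teaching aids invented for this example; they are not a model of how any court actually analyzes a Section 230 defense.

```python
# Illustrative sketch only (not legal advice, not how courts reason):
# modeling the two Section 230 shields and the carve-outs described above
# as a simple decision helper. All names and fields are hypothetical.

from dataclasses import dataclass


@dataclass
class Claim:
    platform_authored_content: bool  # Did the platform create the content itself?
    claim_type: str                  # e.g. "defamation", "federal_crime", "copyright"
    challenges_removal: bool         # Is the suit about the platform taking content down?
    removal_in_good_faith: bool      # Was the takedown a good-faith application of its rules?


def section_230_shields_apply(claim: Claim) -> bool:
    """Rough sketch of the two shields discussed above."""
    # Carve-outs: no protection for the platform's own content, federal
    # criminal law, intellectual property, or (post-FOSTA-SESTA) certain
    # sex trafficking claims.
    if claim.platform_authored_content:
        return False
    if claim.claim_type in {"federal_crime", "copyright", "sex_trafficking"}:
        return False

    if claim.challenges_removal:
        # Shield 2, 230(c)(2): good-faith removal of material the platform
        # "considers to be ... objectionable" is immunized, even if the
        # material is constitutionally protected speech.
        return claim.removal_in_good_faith
    # Shield 1, 230(c)(1): the platform is not treated as the publisher or
    # speaker of content provided by another information content provider.
    return True


# Example: a user sues over a defamatory post written by another user.
print(section_230_shields_apply(Claim(False, "defamation", False, True)))  # True
```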
Ultimately, Section 230 doesn’t create a public right to speak; it creates a private right for platforms to moderate. While the law’s stated policy goals include fostering a “diversity of political discourse,” its primary mechanism is to empower private companies to set their own rules, shielded from most legal challenges.
The platform’s choice of what to allow will inevitably be driven by its own business model and brand considerations, not necessarily by a commitment to abstract free speech ideals. This is why the “Good Samaritan” label can be seen as misleading; the law protects any good-faith business decision to remove content a platform deems “objectionable,” a term the platform itself gets to define.
The Fine Print: Facebook’s Terms of Service and Community Standards
Beyond constitutional and statutory law, the most direct reason Facebook can ban a user is much simpler: the user agreed that it could. Using Facebook is not an inherent right; it is a service governed by a contract. When a user creates an account, they enter into a legally binding agreement, and a ban is often the result of the company enforcing the terms of that agreement.
The Contract You Sign: Agreeing to the Terms of Service
Every Facebook user, upon signing up, must agree to Meta’s Terms of Service (ToS). These terms, which are periodically updated, form the contract between the user and the company. By clicking “agree,” the user consents to abide by a set of rules and grants the company specific rights and permissions.
Key provisions in the ToS include a user’s commitment not to engage in certain behaviors, such as posting content that is hateful, threatening, or incites violence. The terms also explicitly state that Meta can remove content or disable accounts for violations of these rules or its more detailed Community Standards.
Furthermore, users grant Meta a “non-exclusive, transferable, sub-licensable, royalty-free, worldwide license” to use, store, and share the content they post, which is fundamental to the platform’s operation. The ToS also typically includes a clause allowing Meta to update the terms unilaterally; continued use of the platform constitutes acceptance of the new terms.
Inside the Rulebook: Meta’s Community Standards
While the ToS provides the broad legal framework, the specific rules of conduct are detailed in Meta’s Community Standards. These standards apply across Meta’s platforms, including Facebook, Instagram, and Threads, and are the primary document moderators use to make enforcement decisions.
The standards are built around four core principles:
Authenticity: Prohibiting misrepresentation, fake accounts, and coordinated inauthentic behavior.
Safety: Removing content that could contribute to real-world harm, such as incitement to violence or publicizing crime.
Privacy: Protecting personal information and prohibiting the sharing of private data without consent.
Dignity: Barring harassment, bullying, and hate speech that degrades individuals or groups.
These principles are translated into detailed policies covering a wide range of content. The most common areas where users are found in violation include:
Violence and Incitement: This includes direct threats of violence against people or places, glorifying violent events, and content from designated “Dangerous Organizations and Individuals” such as terrorist groups and organized hate groups.
Hateful Conduct: This is one of the most complex and contentious areas. Meta defines hate speech as a direct attack on people based on their “protected characteristics,” which include race, ethnicity, national origin, religion, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.
Bullying and Harassment: This policy targets content that is deliberately intended to degrade or shame private individuals, including unwanted sexual advances, attacks on physical appearance, and coordinated harassment campaigns.
Misinformation: While Meta doesn’t remove all false information, it has specific policies against misinformation that it deems likely to cause imminent physical harm. This includes false claims about public health crises (like vaccines during a pandemic) and content intended to suppress or interfere with voting.
Spam and Inauthentic Behavior: This covers a wide range of activities, from creating fake accounts and artificially boosting engagement (“engagement bait”) to running scams and deceptive marketing schemes.
Adult Nudity and Sexual Activity: Meta has detailed rules restricting sexually explicit content, though it makes exceptions for content posted in the context of breastfeeding, childbirth, health, or art.
Enforcement in Action: How Facebook Implements Its Rules
Meta enforces its Community Standards through a massive, hybrid system that combines technology and human review. The process typically begins in one of two ways: proactive detection by AI systems that are trained to recognize violating content (like graphic violence or spam), or reports from users who flag content they believe violates the rules.
Once content is flagged, it may be reviewed by a member of Meta’s global team of content moderators. These reviewers, who work in numerous languages and have local context, are trained to apply the Community Standards to make a final decision.
When a violation is confirmed, Meta applies a penalty. For most violations, the company uses a “strike” system, where consequences escalate with repeated offenses.
| Violation Severity Level | Example Violation Types | Typical Consequence (Strike Count) | Immediate Ban Possible? |
|---|---|---|---|
| Low | Spam, Engagement Bait | Warning or 1st Strike | No |
| Medium | Bullying and Harassment | 1st – 5th Strike (escalating restrictions) | No, unless part of severe coordinated harm |
| High | Hate Speech, Violence and Incitement | 1st – 5th+ Strike (escalating restrictions) | Yes, if it involves a credible threat or designated dangerous organization |
| Severe / Zero-Tolerance | Terrorism, Child Sexual Exploitation, Human Trafficking | Not applicable | Yes, immediate and permanent account removal |
Note: This table is a simplified representation of Meta’s enforcement policies, which are complex and subject to change. The strike system generally involves a warning for the first violation, followed by escalating restrictions on creating content (e.g., 1 day, 3 days, 7 days, 30 days) for subsequent strikes. Severe violations bypass this system entirely.
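For readers who think in code, the simplified strike flow in the note above can be sketched as a small Python function. The schedule values are taken only from the example durations in the note (warning, then 1, 3, 7, and 30 days); the function name, category labels, and structure are hypothetical and do not represent Meta’s actual internal systems.

```python
# Illustrative sketch only: a toy model of the simplified strike-based
# enforcement flow described in the note above. Names and values are
# hypothetical teaching aids, not Meta's real enforcement logic.

# Escalating restrictions by strike count (in days); the first strike is a warning.
RESTRICTION_SCHEDULE_DAYS = {1: 0, 2: 1, 3: 3, 4: 7, 5: 30}

# Categories the table treats as zero-tolerance, bypassing the strike system.
ZERO_TOLERANCE = {"terrorism", "child_sexual_exploitation", "human_trafficking"}


def apply_penalty(violation_category: str, prior_strikes: int) -> str:
    """Return a human-readable penalty for a newly confirmed violation."""
    if violation_category in ZERO_TOLERANCE:
        return "immediate permanent account removal"

    strikes = prior_strikes + 1
    days = RESTRICTION_SCHEDULE_DAYS.get(strikes)
    if days == 0:
        return "warning (strike 1)"
    if days is not None:
        return f"{days}-day restriction on creating content (strike {strikes})"
    # Beyond the listed schedule, consequences continue to escalate.
    return f"30-day restriction or account disablement (strike {strikes})"


# Example: a third confirmed bullying violation.
print(apply_penalty("bullying_and_harassment", prior_strikes=2))
# -> "3-day restriction on creating content (strike 3)"
```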
This entire structure—from the ToS to the detailed standards and the enforcement pipeline—functions as a private legal system. A user being banned is not an issue of public law but a final judgment rendered by a private governance system they voluntarily joined.
The sheer scale of this operation, with billions of users and posts, makes it an inherently imperfect system. The reliance on AI and rapid human review means that context, satire, and nuance can be lost, leading to errors and decisions that feel arbitrary to the user. The Oversight Board’s frequent overturning of Meta’s initial decisions on complex cases is a testament to the systemic challenges of applying a universal rulebook at an inhuman scale.
The New Battleground: Editorial Discretion and State Regulation
The legal landscape governing social media is not static. In recent years, a new front has opened in the battle over content moderation. As platforms have become more assertive in enforcing their rules, particularly around political speech, some state governments have attempted to regulate their ability to do so. This has given rise to a powerful counter-argument: that platforms themselves have a First Amendment right to decide what speech they host.
A New First Amendment Claim: The Platform’s Right to Curate
In response to accusations of “censorship,” social media companies have argued that their content moderation practices are not censorship but rather a form of “editorial discretion” protected by the First Amendment. This argument frames platforms not as passive utilities but as expressive enterprises, akin to a newspaper or a bookstore.
The Supreme Court has long held that a newspaper editor’s choice of which articles or letters to the editor to publish is itself a form of speech. In Miami Herald Publishing Co. v. Tornillo (1974), the Court struck down a state law requiring newspapers to print replies from political candidates they had criticized, reasoning that forcing a newspaper to print something against its will is “compelled speech,” a practice the First Amendment strongly disfavors.
Platforms argue that their decisions to remove, downrank, or label content are modern equivalents of this editorial judgment. By curating the content on their sites, they are creating a particular kind of expressive environment, and this act of curation is itself a form of protected speech.
The States Strike Back: Texas, Florida, and NetChoice v. Paxton
This clash of legal theories came to a head with laws passed in Florida and Texas designed to limit platforms’ moderation abilities. Texas’s H.B. 20, for example, sought to prohibit large social media platforms (those with over 50 million active U.S. users) from removing content or users based on their “viewpoint.”
The states argued that these platforms function as “common carriers” or modern public squares and that government has an interest in ensuring residents have access to them for public discourse.
Tech industry trade groups, NetChoice and the Computer & Communications Industry Association (CCIA), immediately sued, arguing the laws were a form of unconstitutional compelled speech. The challenges to the two laws, NetChoice, LLC v. Paxton (Texas) and Moody v. NetChoice, LLC (Florida), made their way to the Supreme Court.
On July 1, 2024, the Supreme Court decided both cases together. The justices unanimously vacated the lower court rulings and sent the cases back for a more detailed, function-specific analysis. However, the majority opinion, written by Justice Elena Kagan, delivered a strong affirmation of the platforms’ core argument.
The Court held that a platform’s curation of content—compiling and arranging the speech of others into an expressive product like a news feed—is an activity protected by the First Amendment. Justice Kagan wrote that it is “no job for government to decide what counts as the right balance of private expression—to ‘un-bias’ what it thinks biased, rather than to leave such judgments to speakers and their audiences.”
This ruling reveals that the First Amendment is a double-edged sword in this debate. While users often invoke it to claim a right to speak on platforms, the courts are finding that platforms have a more powerful First Amendment right to curate and to not be forced to host speech they find objectionable.
The First Amendment, therefore, acts as a significant shield for platforms against government regulation of their content moderation policies.
The Government Coercion Wrinkle
There is one remaining, though challenging, avenue for applying the First Amendment to a platform’s moderation decision: government coercion. If a private company like Facebook removes content not because of its own independent application of its policies, but because it has been coerced or significantly encouraged to do so by government officials, its action may be considered “state action.”
This practice, sometimes called “jawboning,” involves government officials using the threat of adverse regulatory action or punishment to pressure private companies into suppressing speech. Recent lawsuits have alleged this very scenario, with plaintiffs claiming that platforms removed content related to elections or vaccines due to pressure from members of Congress or federal agencies.
However, courts have set a high bar for proving such claims. They have distinguished between permissible government “attempts to convince” and unconstitutional “attempts to coerce.” Mere public criticism or calls for regulation from politicians have generally been found insufficient; a plaintiff must show a credible threat of punitive action that effectively transformed the platform’s private choice into a government mandate.
The NetChoice decision, while protecting expressive curation, did leave the door open for other forms of regulation. The Court’s instruction to the lower courts to conduct a more granular, “as-applied” analysis of specific platform functions suggests a potential path forward for regulation.
While a state may not be able to regulate the content of a curated news feed (an expressive act), it might have more leeway to regulate non-expressive functions, such as e-commerce, direct messaging services, or account verification processes, under a different legal theory of regulating commercial “conduct” rather than “speech.”
A System of Self-Governance: The Meta Oversight Board
Faced with immense power over global speech and increasing pressure from governments and the public, Meta has embarked on an unprecedented experiment in corporate self-governance: the Oversight Board. This body represents an attempt to introduce a layer of independent accountability and procedural fairness into its vast and often opaque content moderation system.
An Independent Check on Power
The Oversight Board was established in 2020 as an independent body to review some of Meta’s most difficult and significant content moderation decisions. Its stated mission is to protect freedom of expression and other human rights by providing an independent check on Meta’s enforcement of its Community Standards.
The Board is composed of experts from around the world with backgrounds in law, journalism, human rights, and technology, and it is funded by an independent trust to ensure its operational autonomy from Meta.
The process works as a form of high-level appeal. After a user has exhausted Meta’s internal appeals process for a piece of content that was removed (or left up), they can submit their case to the Board. Meta can also refer cases directly. The Board selects a small number of cases that it deems globally significant or important for setting precedent.
For each case, the Board invites public comments from experts and interested parties to gather a wide range of perspectives. A panel of board members then deliberates and votes on a decision.
The Board’s decisions on individual pieces of content are binding on Meta, meaning the company must restore or remove the content as directed, unless doing so would violate the law. In addition to these case decisions, the Board also issues non-binding policy recommendations to Meta, urging the company to clarify or change its underlying rules to be more transparent, consistent, and respectful of human rights.
Notable Decisions and Their Impact
Although the Board reviews only a tiny fraction of the millions of content decisions Meta makes daily, its rulings have had a significant impact on the company’s policies and approach to moderation.
The Suspension of Donald Trump: In its most high-profile case, the Board upheld Meta’s decision to suspend former President Donald Trump’s accounts following the January 6th Capitol riot. However, it sharply criticized Meta for imposing an “indefinite” suspension, which was not based on any existing policy.
The Board ordered Meta to review the case and develop a clear, time-bound policy for how it handles the accounts of influential political leaders during times of civil unrest. This forced Meta to create new protocols and be more transparent about its rules for world leaders.
Context in Hate Speech and Incitement: Many of the Board’s decisions have forced Meta to consider context that its initial reviews missed. It has overturned decisions to remove content critical of immigration policies, arguing for greater protection of political speech even when it is offensive.
It has ordered the restoration of a drag artist’s video that used a term designated as a slur, recognizing that the term was being used in a reclaimed, self-referential way allowed by Meta’s own policies. It has also analyzed the use of symbols with dual meanings, such as those used by hate groups but which also have other cultural uses, pushing for more nuanced enforcement.
Revisiting Misinformation Policies: In a key policy advisory opinion, the Board examined Meta’s policy of removing certain COVID-19 misinformation. It recommended that Meta move away from removing such content and instead use less restrictive measures like labeling and downranking, arguing this approach would better protect free expression while still addressing potential harm.
The creation of the Oversight Board can be seen as a sophisticated governance strategy. By establishing an external body with a transparent process, Meta is attempting to build legitimacy for its private system of rule-making. It outsources its most difficult and controversial decisions to a body that appears independent, thereby deflecting criticism that its power is absolute and arbitrary.
The Board’s detailed, case-by-case rulings also consistently highlight a fundamental tension: the inadequacy of a single, universal set of rules to account for the vast diversity of local contexts, languages, and political situations across a global platform. In doing so, the Board continuously exposes the inherent limitations of content moderation at scale.