Free speech in social media: why it matters even when it’s uncomfortable
You can tell a lot about a society by what you’re allowed to say out loud.
For centuries, the battles over free speech happened in town squares, pamphlets, newspapers, and universities. Today, they happen on timelines and comment threads. Whether you are the President of a country or a single parent posting after work, social media has become the most visible “public square” humanity has ever built.
That makes a hard question unavoidable:
Will the new public square honor robust free speech—even when it’s uncomfortable—or quietly decide which voices are allowed to exist?
This article makes the case that strong free‑speech norms on social media are essential, especially when speech is controversial or unpopular. We will also acknowledge the strongest arguments for tighter moderation and show how platforms can protect both safety and freedom without drifting into viewpoint censorship.
—
What free speech is (and isn’t)
In the United States, the First Amendment protects speakers from government punishment for most kinds of speech. Other free societies have similar protections. The basic idea is simple:
- The government cannot punish you just because your views are unpopular, offensive, or politically inconvenient.
- It can punish speech that crosses clear legal lines: true threats, incitement to imminent violence, fraud, and so on.
Social media companies are private entities, not governments. That means they are not legally bound by the First Amendment in the same way the state is.
But when private platforms become the primary channel for public discourse, their decisions start to feel very close to censorship, even if no government official signs the order.
So the question for platforms and infrastructure providers is not just:
“What are we legally allowed to do?”
It is also:
“What kind of society are we helping to create?”
When users have been led to expect a place to debate, organize, and stay connected, using that dependence to selectively silence lawful viewpoints—especially on one side of the political spectrum—is a deep betrayal of trust.
—
From town hall to timeline: why platforms now matter so much
In the past, if a newspaper refused to print your letter, you could start a newsletter, stand on a corner, or go door‑to‑door. It was hard, but your speech could not be instantly erased from all major channels at once.
Today, a handful of social platforms and infrastructure providers control:
- The distribution of your speech (who sees it and when)
- The monetization of your speech (whether you can earn a living from it)
- The records of your speech (which posts survive and which vanish)
When someone is banned, shadow‑banned, demonetized, or “fact‑checked” into invisibility, it is no longer a private conversation being cut short. It’s a central pillar of the modern public square being removed.
When whole communities migrate to alternative platforms—for example, people who felt silenced on mainstream networks, including conservatives and other dissidents—those alternatives can themselves be shut down by app stores or hosting providers. The Parler case is a vivid example: app store removals followed by hosting termination took an entire platform and its community offline overnight.
In that environment, strong free‑speech norms are not a luxury. They are a condition for any genuine democracy or marketplace of ideas.
—
The case for strong moderation (steelman)
Before we talk about why free speech must be defended, we should honestly acknowledge why many people push for tighter moderation. A fair conversation requires steelmanning the other side.
Supporters of aggressive moderation and deplatforming often emphasize three main concerns:
1. Safety and harm reduction
Harassment, doxxing, and targeted abuse can ruin lives. False accusations can damage reputations. Calls for violence and terrorism clearly must be stopped.
2. Misinformation and confusion
False claims about elections, public health, or crime can mislead millions. Conspiracy theories can breed distrust and social unrest. Many people worry that “letting everything spread” will destabilize societies.
3. Business and regulatory pressure
Advertisers do not want their products shown next to content they consider toxic. Journalists, activists, and sometimes governments pressure platforms to “do more” to stop harmful speech. Companies fear reputational damage or regulation if they don’t act.
These concerns are not imaginary. Serious harassment, incitement, fraud, and other clearly illegal content must be dealt with. Families and communities deserve safety.
The question is not whether platforms should act against genuinely illegal or violent content. The question is:
How far beyond that line should they go—and with how much power, secrecy, and bias?
—
The case for robust free speech (rebuttal)
Free speech is not primarily about protecting polite, widely accepted opinions. Those rarely need defending.
It is about protecting:
- Unpopular or dissident views
- Criticism of powerful institutions and leaders
- Debates over what is true and what is false
History is full of examples where those in power claimed to be protecting the public from “dangerous” ideas, only to silence political opponents, religious minorities, or whistleblowers. When gatekeepers decide which opinions are too dangerous to be heard, dissidents are usually the first to pay the price.
On modern platforms, viewpoint‑based censorship can show up as:
- Sudden bans of high‑profile figures (for example, the removal of President Trump’s accounts from Twitter and Facebook, which critics argue reflected uneven enforcement and set a dangerous precedent).
- The app store and hosting takedown of Parler, which effectively erased a competitor whose user base included many conservatives.
- Broad “misinformation” labels or removals applied in ways critics say consistently disfavor one side of the political spectrum.
- Heavy reliance on opaque “fact‑checking” organizations whose biases, funding, and accountability are often unclear.
Critics do not deny that some content is genuinely harmful or false. Their concern is that:
- Rules are applied selectively.
- Reporting on alleged misconduct by certain agencies or political figures is suppressed as “misinformation” before it can be debated.
- Ordinary users, including conservative voices or other dissidents, are afraid to share lawful opinions for fear of sudden punishment, shadow bans, or demonetization.
This chilling effect can be as powerful as explicit censorship.
—
Three lines that must never be blurred
To protect both safety and freedom, platforms should draw—and respect—three clear lines:
1. Illegal vs. lawful speech
– Clearly illegal: direct threats, incitement to imminent violence, terrorism recruitment, child exploitation, and other crimes.
– Lawful but controversial: political arguments, religious beliefs, criticism of governments or institutions, debates over data and claims, uncomfortable jokes or satire.
Platforms must act decisively on the first group. The second group must be protected, even when it offends.
2. Government pressure vs. independent policy
When government officials “suggest” that platforms suppress certain topics or accounts, critics argue this crosses into indirect, unconstitutional censorship. Platforms should resist and disclose such pressure, not quietly comply.
3. Transparent moderation vs. hidden manipulation
Visible rules, clear warnings, and an appeals process are one thing. Hidden shadow bans, secret blacklists, or quiet demonetization—without explanation—are another. Users deserve to know when and why their reach or revenue is being restricted.
Without clear boundaries, the temptation to quietly sideline disfavored views becomes overwhelming.
—
What this means for ordinary users
For everyday people, the stakes are practical and personal:
- You can lose your community overnight. Years of building followers and relationships can vanish with one unexplained decision.
- You can lose your livelihood. Demonetization of lawful content can wipe out income streams for creators and small businesses.
- You can lose your courage. When you see others punished for lawful opinions, you start self‑censoring. Whole topics become “dangerous” to mention.
This is not how a free people should have to live.
Whether you are conservative, progressive, libertarian, or something else entirely, you benefit from a system where everyone can argue their case, present evidence, and try to persuade others—without fearing that a small group of unaccountable moderators or outside “fact‑checkers” will erase them.
—
What this means for platform leaders
If you help run a social platform, host infrastructure, or manage an app store, you are no longer just a private business. You are a gatekeeper of the public square.
That comes with enormous responsibility. A principled approach to free speech should include at least three commitments:
1. Clear, narrow rules
Define illegal content precisely and act quickly against it. Avoid vague categories like “problematic” or “offensive” that invite abuse and selective enforcement.
2. Visible enforcement and appeals
Inform users when they’ve violated a rule, explain what was removed and why, and offer a realistic appeal path. Be willing to reverse mistakes and publish aggregate data about enforcement.
3. Restraint under pressure
Disclose government or political requests to censor content. Resist becoming an informal extension of state power or a tool for partisan agendas. When you control hosting or app distribution, be especially cautious about erasing entire platforms or communities.
These safeguards do not solve every problem. But they make it much harder for censorship to hide under the banner of “safety” or “misinformation.”
—
A path forward: defending free speech without defending evil
Some people hear “free speech” and assume you support whatever is being said. That is not true.
You can:
- Reject and condemn truly evil speech: calls for violence, racial hatred, terrorism
- While still insisting that lawful, controversial, or unpopular opinions remain free to be spoken, debated, and criticized.
A healthy society does not protect speech because it is always wise or kind. It protects speech because:
- We are fallible and often wrong about what is true.
- Majorities and powerful institutions can be deeply mistaken.
- The best ideas and the worst ones need to be exposed to scrutiny in the open, not driven underground.
Social media just happens to be the place where that contest now happens.
—
Three actions you can take
You don’t have to run a platform to make a difference. Here are three realistic steps, depending on who you are:
1. If you are an ordinary user
– Talk openly—yet respectfully—about free speech with friends, family, and colleagues.
– When you see lawful opinions censored or throttled, ask public questions: “Which rule did this violate?” “Is this applied evenly across viewpoints?”
2. If you work inside a platform or tech company
– Share this article privately with your team and ask, “Where might we be going beyond safety into viewpoint censorship?”
– Advocate for clearer rules, better notices, and real appeals. Often, change begins with one person asking hard questions in the room.
3. If you are a policymaker, journalist, or civic leader
– Push for transparency: disclosure of government requests, enforcement statistics, and the identities and standards of “fact‑checking” partners.
– Defend the principle that lawful speech—even unpopular speech—should not be quietly erased by a handful of gatekeepers.
The test of a free society is not how it treats polite, popular views, but how it treats voices that powerful people dislike.
At smfree.org, we choose to defend those voices. We invite you to stand with us.
—
Series: Restoring Free Speech Online
This article is part of a multi‑part series on social media, censorship, and principled free speech.
1. Free speech in social media: why it matters even when it’s uncomfortable
2. Case study: Trump’s Twitter and Facebook bans
3. Case study: The Parler takedown (Apple, Google, AWS)
4. Shadow banning and demonetization: how quiet censorship works
5. Who are the “fact‑checkers”? Transparency, bias, and accountability