By Cliff Potts, CSO, and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — March 4, 2026

For years, social media platforms treated blocking as a personal failure. Users were encouraged to “engage,” “mute,” or “report,” but rarely to draw hard boundaries. Blocking was framed as avoidance, fragility, or an admission that someone had “lost” an argument.

That framing served platforms, not users.

When Bluesky treated blocking as a normal, legitimate tool rather than a last resort, it shifted power back to individuals. The effect was immediate and measurable: harassment lost reach, bad actors lost audiences, and ordinary users stopped being pressured to tolerate abuse for the sake of engagement.

Blocking Is Boundary-Setting, Not Censorship

Blocking does not silence anyone. It simply removes the blocked person's access to the blocker's attention. That distinction matters.

On most platforms, users are told that blocking is equivalent to censorship, even though no speech is restricted. The blocked account can still post freely. What they lose is proximity to someone who does not wish to hear them.

Bluesky rejected the false moral framing around blocking. Users were not required to justify their boundaries. They were not asked to prove harm. They were allowed to curate their own space without apology.

That normalization changed how people interacted.

Abuse Stops When It Loses an Audience

Harassment thrives on visibility. Trolls escalate when they are seen, quoted, and amplified. Blocking interrupts that feedback loop.

On Bluesky, blocking removed the reward structure that sustained abusive behavior. Without reactions, pile-ons, or algorithmic boosts, harassment became inefficient. Bad actors either adjusted their behavior or disappeared into smaller, self-contained circles.

This did not eliminate conflict. It eliminated performative cruelty.

Safety Without Performance

On many platforms, marginalized users are forced to publicly perform harm in order to be protected. Screenshots, explanations, and repeated exposure become prerequisites for action. Safety becomes conditional and exhausting.

Bluesky’s approach reduced that burden. Blocking did not require a spectacle. It did not demand emotional labor. Users could simply opt out.

For queer users in particular, this mattered. Safety no longer depended on being believed by a moderation team or validated by an audience. It depended on personal agency.

Blocking Changed Community Norms

When blocking is normalized, community behavior shifts. Users learn that access is not guaranteed. Speech may be free, but attention is not.

This discourages boundary-testing and encourages self-regulation. Conversations become more intentional. People who want to be heard adjust how they speak. Those who refuse to adjust lose reach.

That is not punishment. It is consequence.

The Incentive Other Platforms Refused to Break

Large platforms resist strong blocking tools for one reason: conflict drives engagement. Every argument generates clicks, impressions, and data. Blocking reduces those metrics.

Bluesky accepted that trade-off. It chose user well-being over artificial activity.

In doing so, it demonstrated that healthier social spaces are not a mystery. They require platforms to stop treating abuse as a feature.

Blocking was not a secondary tool on Bluesky.
It was a statement about who the platform was built for.

For more social commentary, please see Occupy 2.5 at https://Occupy25.com

This essay will be archived as part of the ongoing WPS News Monthly Brief Series available through Amazon.


