The Slippery Slope to Censored Speech

Last week, once again, the topic of content moderation on social networks took center stage. Against the backdrop of openly divergent policy decisions by major social media platforms, and amid extremely high social and political tensions, the debate over what role those platforms should play in moderating speech exploded.

After watching events unfold throughout the week, I worry that—despite the best intentions and efforts of many—we are already far down the path to censorship of all digital speech under the banner of preventing harm.

If we aren’t careful, we are likely, within the next few decades, to find the integrity of public speech jeopardized, along with our ability to speak freely in private.

Let me explain why I am so worried and offer a few suggestions on what to do about it.

Our Current Dangerous Path to Censorship of Digital Speech

Historically societies have drawn a distinction between public and private speech. If you were looking to reach a lot of people, you had to do it in a mass public context with a single common message and broad oversight. If you just wanted to talk to a few people, that speech occurred in a private and free context.

This distinction between public and private has been very useful. It meant that societies could think about reasonable limits to free speech in the public context where it reached many. But at the same time citizens could be assured that their private conversations were nearly fully unregulated and truly free.

So the traditional First Amendment sense of free speech in the U.S. could have clear and reasonable exceptions around libel, incitement to violence and so on, limits that were enforced in public spaces. However, in practice, there were truly no limits on private exchange among trusted counterparts. Even under the most repressive regimes, which tried to organize thought police, trusted communities could have unlimited private conversations.

The internet brings two fundamental challenges to this way of thinking about different spaces for speech.

First, technology erases any meaningful distinction between public and private spaces, which makes that distinction unusable as a natural boundary between regulated and unregulated speech. Now, very little differentiates a large private message thread or group from a public forum. In a frictionless world, technically private and technically public spaces can have the same reach. That means the idea of having rules that govern “public posting” becomes nonsensical, because saying something in a large message thread or group is no different from saying it on a social media platform. For more on this topic, you can read my 2016 column, “Free Speech and Democracy in the Age of Microtargeting.”

Second, technology in theory gives platforms and the state an unimaginably vast new power to monitor and control not just public but private speech. Historically, private speech was guaranteed to be truly free because it was practically impossible for anyone to monitor and control it. A society could write any rule it wanted, but free speech would still exist.

Most people have not yet registered that technology turns this situation upside down.

We can now easily monitor all private conversations at scale (even in physical space, with today’s smart speakers), tracking violations of policies, redacting or blocking specific messages and more. This is a brave new world, and one in which any regulation can be enforced far more completely than was ever before imaginable.

The challenge we face today is that many people haven’t internalized the way technology has changed the speech landscape. They want platforms to apply to public internet speech the kinds of standards that have historically governed the public sphere, and they don’t recognize how doing so will jeopardize historically unregulated private speech in the digital age.

They see some content in public they don’t like, and they want it handled either at a fundamental legal level or as a matter of platform policy or product experience.

The reaction is understandable given the old-world paradigm they are used to, but it is extremely dangerous in both the short and the long term.

In the short term, moderating public speech simply drives more speech away from the technically public forums into technically private but still huge and impactful spaces. It doesn’t stop the speech—the speakers just change tools. When people want to see content moderation on public content, not only are they missing the forest for the trees in terms of where actually dangerous speech is taking place, but they are explicitly pushing that speech into secretive spaces, where even fewer people can see—or trace—the content.

Longer term, people will come to realize that the speech they deem dangerous and don’t like occurs predominantly in private spaces—not the public sphere—and that those private spaces can be feasibly monitored and controlled.

This, unfortunately, is where things get really scary. With the expectation set that platforms or governments (or some combination) will moderate content, and technological tools available for moderating private spaces at scale, the pressure builds for total oversight of all speech, not just historically public speech. I think it will prove to be nearly impossible to hold a line between technically public and technically private spaces over time, once the mission of harm reduction at the platform level becomes entrenched. 

This is how pressure always plays out in the context of products. Once a theme is embraced and there are examples of it in a few places, it gets extended everywhere.

Some people might ask, what would be so bad about a platform or government regulating all speech—not just public speech?

The answer is that you have to expect that, at some point in the future, bad people (or machines) will come to positions of power over the system. If these bad actors have complete control over human speech—something that has never happened before—they cannot be resisted. Giving a single entity power over all human speech isn’t a risk anyone should be willing to accept. For more on different possible paths and outcomes, check out “How the Internet Broke and What to Do About It” from 2018.

What We Can Do About It, Starting Now

If you clearly see this narrative unfolding before your eyes, as I do, you’ll agree we have to pursue three paths immediately to avoid an otherwise inevitable crisis in the next 10 to 20 years.

  • The first path is product and technology oriented. The best theoretical answer to the challenge is to build fully encrypted, decentralized, identity-rich communications options as quickly as possible.

Beyond technical feasibility, there are two practical concerns to consider with this answer. First, for such a system to be viable and safe, it needs to be broadly used, and not just for illicit or challenging content. It is hard for these systems to compete with entrenched, centralized players in practice, because their security limits their performance and the commercial functionality required to compete for users. Second, a system like this would have its own serious dangers. All the current forms of public speech we look to limit today would be unlimited and—potentially even more scary—unobserved.

The second technological answer is to rapidly evolve existing social and messaging platforms to protect free speech. If major social networks can successfully move to encrypt the communication in private spaces so that they themselves cannot even read the messages, that would be a huge win. It would make it impossible for them to regulate spaces deemed private.
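To make concrete what that would mean, here is a minimal sketch of end-to-end encryption using the open-source PyNaCl library. The names and the toy “platform inbox” are mine, purely for illustration, not a description of any particular network’s implementation; the point is simply that the platform only ever handles ciphertext it cannot read.

```python
# pip install pynacl  (illustrative sketch only, not a production protocol)
from nacl.public import PrivateKey, SealedBox

# Each user generates a keypair on their own device.
# The platform never sees the private half.
alice = PrivateKey.generate()

# A sender encrypts directly to Alice's public key.
ciphertext = SealedBox(alice.public_key).encrypt(b"meet at noon")

# All the platform can do is store and forward opaque bytes.
platform_inbox = [ciphertext]

# Only Alice, holding her private key, can recover the message,
# so the platform cannot moderate what it cannot read.
assert SealedBox(alice).decrypt(platform_inbox[0]) == b"meet at noon"
```

Real deployments layer on key verification, group messaging and forward secrecy, but none of that changes the basic property: a service that cannot decrypt a space cannot moderate it.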

It remains to be seen, however, if this will ultimately be possible. Most platforms have already set—with consumers and regulators—expectations of content moderation in what they define as public spaces. Once you have taken on that responsibility, it is hard to walk back from that toward encryption.

For the messaging products that already have end-to-end encryption, the question is whether they get dragged into the debate over these broader social issues when people realize that most of the bad content many people want moderated is on private services, not public ones.

Beyond encryption, the other major thing platforms need to do—and definitely can do—is strengthen identity and the vibrancy of open discussion around content. While I strongly believe the platforms themselves should stay out of the content moderation game, I have no issue with allowing the community to contextualize and moderate public speech online—much as it does in the real world. But that requires trust in the identities of the people and organizations engaging in the conversation. If you can choose whom you want to listen to and whom you can ignore, there isn’t much of an issue. The existing platforms could vastly improve how participants with strong, trusted identities contribute to moderating spaces on their own and for their communities. I look forward to seeing rebuttals and commentary from the voices I most trust next to any questionable content I see.

In sum, when it comes to the technical and product improvements existing platforms should push forward on, the answer to bad speech is to enable even more speech backed by real identities. 
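As a rough sketch of what this identity-centered, user-side approach could look like, consider the toy filter below. The account names and data shapes are hypothetical and purely illustrative; the idea is that I, not the platform, decide whom to hear and whose commentary appears next to questionable posts.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str                                      # a verified, trusted identity
    text: str
    commentary: list = field(default_factory=list)   # (commenter, note) pairs

# Hypothetical per-user settings; the user controls these, not the platform.
trusted_voices = {"@factcheck_org", "@local_paper"}
muted = {"@spam_account"}

def my_view(feed):
    """Contextualize a feed on the user's side: drop muted authors and
    surface commentary from trusted voices, rather than deleting anything."""
    visible = []
    for post in feed:
        if post.author in muted:
            continue  # I chose not to listen; nothing is removed platform-wide
        post.commentary = [c for c in post.commentary if c[0] in trusted_voices]
        visible.append(post)
    return visible

feed = [Post("@spam_account", "total nonsense"),
        Post("@random_user", "questionable claim",
             [("@factcheck_org", "context: this claim is disputed"),
              ("@unknown_bot", "noise")])]
print(my_view(feed))  # spam dropped, only trusted commentary shown
```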

  • The second important angle to push for is the articulation of new, previously unexpressed global human rights.

To protect freedom of speech, we should articulate and formalize a human right to memory. I should, as an individual, be able to remember whatever I want and use digital tools as an extension of my own memory. At first blush, this may seem tangential to the question of free speech. But guaranteeing a right of memory (which, amazingly enough, some players in the digital landscape are already chipping away at) means we cannot ban ideas or concepts from any platform writ large.

We should articulate a right to truly unlimited private conversation. This could be framed as a specific right to use strong encryption to say whatever you want among consenting adults. Or it could be framed as a new version of the First Amendment that strengthens freedom of speech in private and specifically allows for—as much as we might not like it—many of the forms of speech, from libel to incitement, that are currently carved out of the legal precedent around the amendment.

We should articulate a freedom to listen to whomever you want to hear from, and potentially a right to turn off or mute anyone you don’t want to hear from.

  • The last important area we really need to invest in heavily is media literacy. Watching some younger internet users, I have a lot of hope on this front, because they are far more savvy consumers of speech on the internet than earlier generations. They understand how to read between the lines of posts; they do not put credence in anonymously posted content; they aren’t so gullible as to believe whatever is posted or take offense too easily. They understand the reality: when you are suddenly exposed to all the voices on earth rather than just a few in your local community, you are going to see and experience a lot of crap, and that is OK.

The Counternarrative

I have generally found that people who reject this model—and want to add layers of moderation and control to digital speech—largely come to this conclusion honestly. They usually hold one of several beliefs that make them want the platforms to take action to prevent harm. 

Many believe that because the internet makes private speech so much more potent than it was in the past, treating private speech as truly free is no longer tenable and we should regulate it. Needless to say, I think their cost-benefit math is off and they are likely overvaluing harm reduction in the present relative to the long-term costs of what they propose.

Others believe speech itself shouldn’t be regulated, but argue that if a platform suggests people to follow or plays a role in selecting which content to show me, the platform becomes a responsible party in determining the people I interact with and the content I see. Both of these feel like the wrong mental models to me.

In the context of suggestions, that would imply Google is responsible for the content of all search results I click on because it suggested them, or that DoorDash is responsible for the quality of my meal. Moving liability from the person making a choice to the person or organization making a suggestion would be a major abdication of the personal responsibility and agency implicit in liberal society.

The better model is to recognize that in most social systems users select what they want to see, and the service prioritizes content according to their interests in order to manage the otherwise crushing volume of potential content. It might be slightly easier for people to understand if ranking services were independent personal utilities rather than part of big networks, but the end result is ultimately the same. A service reflecting the wishes of the user is not, in my mind, a party to what the user asked for.

All that said, on the topic of content ranking, I do wonder how much perception would shift if users were given more explicit optional controls over what they see. 

Finally, those who want to add layers of moderation tend to believe moderation can be applied consistently, accurately and fairly, at scale, in a way that doesn’t silence people and doesn’t have unintended consequences.

Let’s just say this has not been my practical experience of attempts to form policies in this space or of the ability of people or machines to implement those policies.

In fact, people who believe this tend to dramatically overestimate their ability to make decisions for others and write good policies, and they frequently miss the reality that in their attempts to help they cause more harm than good. Their intentions may be noble, but by focusing on platform-driven solutions, they actually disempower the end organizations and people that should be at the center of healthy social engagement and change.

Shutting Pandora’s Box

I believe most people want to do good, want to prevent harm, and want to use the tools at their disposal to do so.

The problem, however, is that speech is by its nature a double-edged sword. It is so fundamental to who we are as people that it will always be at the heart of our best and worst moments. And we will always have divergent opinions about which of those moments were the worst versus the best.

So we have to accept that internet speech will cause harm as it causes good, just as speech always has. People and organizations using these mediums should work hard to improve speech, but it is terribly dangerous to mess with the fundamental speech platforms—just as you wouldn’t want to distort the natural properties of air as a medium of real-world communication.

Ask yourself: If you could magically change the working of the physical world so that certain words and phrases were physically impossible to think or express, would you want to do that? Would you have the hubris to make decisions on that list of banned words and concepts for all of humanity, for all time going forward, yourself? Even if you were willing to make such a call, would you trust the thousands of people or the artificial intelligence needed to enforce your rules to do it right today, and to not alter the system in terrible ways in the future? 

I hope we don’t do the seemingly natural and easy thing—yielding to the temptation to use newly possible tools in an attempt to stamp out the bad parts of speech. We would end up destroying speech’s fundamental value by accident.

It is going to take an enormous amount of restraint to avoid gliding from where we are today into heavily moderating and regulating public and private speech, now that we have opened the Pandora’s box of limiting speech on social platforms and user expectations have begun to expand.