Many people have asked me in the last few days what I thought of Mark Zuckerberg’s speech on “free expression,” since it is a topic I have focused on so much. I admire Mark’s commitment to free speech. He did a good job of clearly articulating both some of the nuanced challenges of free speech and a rational framework for how Facebook approaches the issue, bearing in mind the broad and diverse global audience he had to address.
But there are a few key places where my view of the fundamental philosophical issue, and of the right path forward, differs from what Zuckerberg articulated. While I agree that we face the legal, platform and social challenges he outlined, I think we also face a set of challenges to free speech that are inherent to the technology itself.
Namely, because of technology we face a world of more “complete” enforcement of rules and laws, compression of historically independent layers of speech oversight, and consolidation of the gatekeepers and co-sponsors of speech. We also face a difficult move from a world of trust-by-default to distrust-by-default.
Simply put, there is a deeper set of technology-driven issues we need to bear in mind when we as a society think of speech and “rule-setting” around expression.
The Broad Strokes of Zuckerberg’s Argument
I recommend people read Zuckerberg’s full address themselves, but for those who have not and are looking for a summary, here are the major points he makes:
Free speech is critical to a healthy and inclusive society. Over time in the U.S. our conception of free speech has widened, but in times of turmoil, like the ones we face today, there is frequently an impulse to pull back on free speech. That is always a mistake.
The internet has created for people a new set of realities that have both very positive and deeply challenging impacts on our society. In particular, the internet has given a lot more people a voice, made information move faster and enabled people to form types of communities that used to be impossible.
Broadly, with these changes, we are seeing the emergence, with social media, of a new “Fifth Estate” where people directly express their voice “without intermediaries.”
The fundamental question is how you balance between free speech and speech that impinges on the rights and safety of others. Facebook’s responsibility is to remove, as much as possible, content that could cause physical danger. It also has a responsibility to prevent the definition of what speech is dangerous from broadening beyond what is absolutely necessary.
In the quest to do this, there are specific things Facebook focuses on preventing, identifying them through machine-learning algorithms. Facebook is also focused on the veracity of identities—the voices that are speaking.
On the topic of political speech, Facebook does not believe in limiting speech from politicians (paid or unpaid), because the people should decide what is credible in a democracy, not companies. Zuckerberg believes that political ads, specifically, are an important part of voice and that banning them would favor incumbents. And even if you wanted to ban political ads, it is unclear where to draw the line.
On the topic of hate speech, Facebook takes down content that can lead to real-world violence. Identifying what content could cross that line is hard to get right, and you have to be careful of unintended consequences.
Looking forward, there are three threats to free speech that we have to face. The first is a legal threat, as different societies and regimes set rules that challenge free speech (we don’t want to capitulate to the Chinese internet model).
The second is the danger of how centralized platforms choose to self-regulate. The third is cultural, as people give in to the impulse to restrict speech and enforce new norms.
Select Reactions to the Speech
While I am sure there are some extremists who would disagree, in conversations over the last few days the idea that free speech is critical to democracy and our values as a Western society seems alive and well (God help us if that were not the case).
Zuckerberg’s framing of what the internet has “changed” is slightly more controversial, though still not deeply so.
First, his concept that the internet has ushered in the rise of a social-media “Fifth Estate” may well be the most enduring part of his speech. While the term is not new (and it is a bit confusing since “the people” is generally seen as the Third Estate), it is a brilliant framing and elevation of the role of social media in our society.
But I believe it is a miscategorization to talk about the internet as giving people a “voice” rather than increasing distribution. People have always had a “voice.” It just didn’t historically carry very far alone, and it existed in the context of more localized communities (and those communities’ systems of reward and punishment).
As the old saying goes, even in the most repressive regimes “people have always had freedom of speech, just not necessarily freedom after speech.” As I will note later, this distinction is quite important.
You can also take issue with how he discussed the positive impact of ideas quickly spreading online—empowering fundraisers, ideas, businesses and movements.
It isn’t clear to me that the “speed” of the internet has deeply positive impacts on our world, though it is impossible to slow down communication once it is sped up. The problem with speed isn’t just misinformation (as Zuckerberg outlines). It is that we no longer have time to thoughtfully consider options and respond. The faster you drive in a car, the better your reaction time needs to be—and it isn’t clear that we are even close to good enough drivers to safely operate at the speed we now find ourselves moving.
I believe that history will judge us poorly on this. Even events like the Arab Spring—which were lauded at the time—will be looked back on as moments where a lot of damage was done because we as societies found ourselves “driving” much faster than our social reaction times could permit.
On Facebook’s Role in Speech
Unsurprisingly, most of the discussion among pundits about the speech has focused on the question of how and when Facebook intervenes in speech. On this topic I want to call out two specific things.
The first is about identity.
In the speech, Zuckerberg discussed how the solution to misinformation is to focus on the identity of the speaker, and to force people to stand behind their statements and be accountable. He argues that Facebook should (and does) take on the role of making sure that accounts represent “real” people or entities: Facebook is removing billions of fake accounts a year.
I agree with this, but there is some sad irony in it. The idea of “valid trusted identity and real names” was the cornerstone of how Facebook worked, in its earliest days.
The original magic of Facebook when it launched was that, by building off of university-validated email addresses, it created a space for college students to feel safe using real names and photos and connecting to each other online for the first time. For most people it is hard to remember back to those “old days.” But in that era the internet was an untrusted and scary place where you would never use real names (remember “The Net” with Sandra Bullock?).
Over time, the pressure to grow rapidly, to add pages for companies and other organizations, and to extend into communities where there was no “strong identity” has led us to a different place. The trust and accountability of the real world dropped away, which is how billions of “fake” accounts came to be created in 2019 that Facebook then had to try to remove. In many ways, the goal now seems to be to recover what was powerful about the beginning.
I agree that focusing on real identity and accountability is going to be the path that can bring Facebook back to a trusted place. However, as I often point out, there are likely growth and engagement sacrifices that will need to be made to get there. For example, my favorite hobby horse is that you can’t have accountability with Snapchat-style disappearing messages.
The second point I want to call out is about paid political speech.
Zuckerberg’s framework around the importance of freedom of political speech—and freedom of paid political speech—has drawn a lot of attention. Many bloggers and pundits seem to believe that the real reason Facebook doesn’t want to limit political advertising is for economic reasons.
I don’t think that is true at all. I am confident that people can take Zuckerberg’s rationale for not limiting political speech at face value.
It is fundamental to a democratic society that the people decide what is right. No technology company or platform should get in the middle of deciding what political speech is permissible (or even what counts as political speech). The idea of banning political speech, and/or paid political speech, might make Facebook’s day-to-day operation significantly smoother. And it might even be good for the stock price. But it would be a massive net negative for society.
Challenges of the Future
The three challenges that Zuckerberg outlines—legal, platform and social—are indeed serious challenges.
On the legal challenges of the future: while self-serving, the argument that the internet is turning toward Chinese platforms (like TikTok) with very different values and perspectives on speech is true. It is also a reason that governments and people in the West should pause before taking actions that weaken Western internet powers.
Right now, we basically have two internet “blocs”—Chinese and American. To the extent that Europe and other areas pull away with different regulations and fragment the Western internet, as is happening now, a vacuum will be left that will allow the Chinese framework to take over more of the world. The fact that it is in Facebook’s interest to make this argument doesn’t mean that the argument is wrong.
On the platform challenges of the future: this, too, is real. The problem is that small groups of people, not just the leaders, do indeed wield significant power over our highly consolidated speech platforms.
It is unclear how to fully solve this, but there are two directional answers. The first is to “throw away the keys” as much as possible with technologies like encryption. There are frameworks that allow companies to run big platforms but without the power to intervene in how the platforms are used or to modify the fundamentals of speech. The second, more abstract answer, is for companies to make policy decisions and commitments that would be expensive culturally (and ideally financially) to overturn in the future.
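The “throw away the keys” approach can be sketched in miniature: the platform becomes a relay that stores and forwards ciphertext it holds no key to read, so it structurally cannot intervene in speech. The toy hash-chain cipher below is purely illustrative and not real cryptography (production systems use vetted protocols such as the Signal double ratchet); all names here are hypothetical.

```python
# Toy sketch (NOT real cryptography) of a platform that has "thrown away
# the keys": only the two endpoints hold the shared key, so the relay can
# store and forward messages but can never read or alter their content.
import hashlib


def keystream(key: bytes):
    """Derive an endless toy keystream from a shared key by hash chaining."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1


def encrypt(key: bytes, message: bytes) -> bytes:
    # XOR each message byte with the keystream; zip stops at message length.
    return bytes(m ^ k for m, k in zip(message, keystream(key)))


decrypt = encrypt  # XOR stream ciphers are symmetric


class Platform:
    """The relay stores and forwards ciphertext only; it holds no keys."""

    def __init__(self):
        self.mailbox = []

    def relay(self, ciphertext: bytes):
        self.mailbox.append(ciphertext)  # all the platform ever sees


# Two endpoints agree on a key out of band; the platform never learns it.
shared_key = b"agreed-between-alice-and-bob"
platform = Platform()

platform.relay(encrypt(shared_key, b"meet at noon"))
received = decrypt(shared_key, platform.mailbox[0])
```

The design point is that moderation power is removed architecturally rather than by policy: even under pressure, the operator of `Platform` has nothing to decrypt, which is exactly the kind of commitment that is expensive to reverse later.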
Finally, the social challenge to freedom of speech is the most fundamental. There is no question that today “freedom of speech” is limited more by our peers and friends than by any technology or law. Our world is self-censored to a shocking extent. This might be generational, but to Zuckerberg’s point, the most important thing is believing that someone’s right to express themselves matters more than getting one’s own way.
Questions Around the Future of Technological Speech
At this moment where free speech is such a central topic, it is worth acknowledging some of the most important issues about free speech in the modern era that were left unsaid in Zuckerberg’s speech. That might have been because they are too abstract and nuanced for his broad audience, or perhaps because there really are few good answers about how to address them.
There are four things in particular to call out about how technology changes the fundamentals of how speech works.
First, technology dramatically increases the “completeness” of any laws we set about speech.
Historically, while societies could have all sorts of speech regulations, it was impossible for a society to monitor and regulate what their citizens were saying to each other on the ground, in person, in the back of bars, etc. Speech laws could exist for the public square, but were impossible to broadly enforce.
Moving into the future, any speech regulations or frameworks societies come up with can be nearly fully enforced, since our entire communication stream exists in some technological format. A world where everything is recorded, within “earshot” of an Alexa or the sight line of a Nest, is something we should be very nervous about, because over time there will be no escape valve for bad laws.
Further, never before have we been in a place where we could block speech rather than simply punish it after the fact. When speech can only be punished after the fact, any speaker can choose to ignore the law and face the consequences. But with the newly possible pre-filtering of speech, essentially for the first time ever, far more extreme future outcomes become possible, and we should fear them.
Second, technology compresses the layers of speech oversight that used to independently function.
At the most basic layer of human biology and abilities, we have always had complete freedom of speech. We fundamentally have the power to say whatever we want. On top of that base reality, there has always been a patchwork of different systems and organizations that manage human speech in different settings. These range from what can technically be delivered on different mediums, to government policies, to the publications and venues that host speech, to social norms in different communities.
The net effect of this historical patchwork is that different types of speech could exist freely in different spaces. What you could say in a certain church was very different than what you could say in a private home, or in a specific community.
Unfortunately, technology is consolidating and standardizing this historical patchwork of speech spaces and oversight because it is erasing space and time, and making everything searchable across all spaces. This will force a set of very hard conversations about speech as a whole versus speech in communities.
Third, technology has driven a consolidation of gatekeepers and co-sponsors for speech.
Historically you may have always had freedom in your individual voice, but your voice also didn’t carry very far alone.
In order for an idea to become broadly distributed, you needed to find spaces and communities to not only host your idea, but grow and distribute it along with you. You had to find a gatekeeper and/or co-sponsor (depending on how you see things) to enable your speech.
In a sense, those publications, public venues and groups that hosted your speech became directly tied to it; they got the benefits of association, but also shouldered the risk of your speech along with you.
To be sure, in our modern technologically driven world, you still need collaborators in order to magnify your voice/speech. But what has changed is that those groups can be far more ad hoc than they were historically. The community that comes together to drive a type of speech can—obviously—be drawn from a global audience rather than a local one.
That makes much more extreme speech possible. If you live in the real world without technology and want to express an idea, you need to find people in your physical community to host your speech. Because that community is likely small and reasonably diverse, you have to say things that are acceptable. But with digital speech, you don’t have to be so moderate. You can cherry-pick from a global space of billions of people who will give you permission to do exactly what you want.
The cost of co-sponsorship is also lower with digital speech. If you own a physical venue or publication, you put capital and reputation at risk when you host speech. When you form an inexpensive ad hoc community to promote your speech, you take on no fixed risk. This changes the nature of discourse.
Fourth and finally, technology moves us from a world of trust-by-default to distrust-by-default.
We have for the last few centuries been able to—largely by default—trust the people we interact with and the media we see.
The reason we could trust media was that—for at least the last few hundred years—it was very hard to convincingly falsify. This is obviously not the future we will live in. As I have written about before, we are going to have to go back to network-based trust versus content-based trust.
The reason we could largely trust people was that we knew who they were, and would see them again or need to interact in the future. This has certainly changed with urbanization, etc., but if you think of a small town or neighborhood, the reason you can trust your neighbors is that you know where they live and life is an iterated game. This too is breaking down both as we globalize and as identity becomes relatively more fluid online.
We can’t fix the former. We are going to have to get used to a world where, when we see something without context, we do not believe it by default. This means we have to build better identity and reputation systems so we can trust specific people and relationships over the long term.
Conclusion
We are going to get through this period in history and come out stronger. But I think that we are going to have to confront some hard truths and make some painful choices along the way.
I am worried that regardless of the best intentions of company leaders, regulators and community members, the fundamental technology of the internet moves us inexorably in a direction of more speech regulation than is healthy for the long-term resilience of our civilizations globally.
Once something becomes possible, it takes an almost Herculean effort to not take advantage of it.
If the story of the last decade or so of technology is that speech monitoring and control is now possible in a new way, I fear that the only solution to preserve free speech is technological tools around distribution and encryption that would level the playing field.