While you can quibble with the details, the social platforms did the right thing in initially limiting the distribution of last week’s New York Post article on Hunter Biden.
As others have noted, in the wake of the 2016 election, platforms like Facebook and Twitter were forced to make certain commitments to increase their moderation of content. While both companies handled some elements of the moderation rollout inelegantly, in limiting the distribution of the Post’s dubious article they lived up to those promises.
The bigger significance of the incident, however, will likely be as a watershed moment demonstrating the Faustian bargain society has made by forcing social media services to take on a fact-checking and editorial function.
What might have seemed to many a broadly good idea when applied to Russian bot farms and fringe groups gets far more complicated when dealing with legitimate American publications.
And by forcing social media platforms to take on a fact-checking role, we as a society have opened the door to long-term government oversight and censorship of speech at a scale previously unimaginable in human civilization.
The point at which we could have chosen another path is long past.
We could have decided to regulate social media platforms as we do neutral utilities, recognizing their unique centralized power and forcing them to relinquish the normal editorial control that private companies traditionally enjoy over their private spaces.
Instead, we have opted to treat social platforms like newspapers, pressing them to act as editors. Now that we have gone down that road, and platforms are seen as policing not just the fringe but also legitimate publications, it is hard to see a way back.
The question now is, where can we draw hard lines to defend free speech, free memory and free thought?
Bring Back ‘Most Recent’: Create a New Generation of Deterministic Feeds With Explicit Publisher and User Rights
In their early days, the Facebook news feed, Twitter and Instagram all offered deterministic “most recent” feeds, in which a user saw everything posted by the people and pages they followed.
These views proved to be overwhelming and they also didn’t do a particularly good job of surfacing what viewers actually wanted to see. That’s why black-box algorithmic ranking grew in popularity to the point of complete dominance. Ranked feeds explicitly put social media platforms in the role of selecting what people see—rather than simply delivering users’ posts to those who asked for them.
In today’s increasingly pressurized social, legal and political world, it is time to—at a minimum—bring back deterministic feeds as an option for those who want social networks to function as a pass-through. It likely also makes sense to give users more explicit controls to specify what content they want to see in alternative feeds.
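To make the distinction concrete, a deterministic feed is a pure function of what was posted and whom the user follows. The sketch below is illustrative only; the `Post` structure and function names are mine, not any platform’s actual code.

```python
# A minimal sketch of a deterministic "most recent" feed: every post from every
# followed account, merged and sorted by timestamp, with no ranking model and no
# editorial filtering in between. All names here are illustrative.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Post:
    author: str
    timestamp: float  # Unix epoch seconds
    text: str


def deterministic_feed(followed_posts: Iterable[List[Post]]) -> List[Post]:
    """Merge all followed accounts' posts into one reverse-chronological list."""
    merged = [post for account in followed_posts for post in account]
    return sorted(merged, key=lambda p: p.timestamp, reverse=True)


# Example: two followed accounts; the feed is simply their posts, newest first.
alice = [Post("alice", 1603000000, "morning thoughts")]
nypost = [Post("nypost", 1603000500, "latest story")]
for post in deterministic_feed([alice, nypost]):
    print(post.author, post.text)
```

A ranked feed, by contrast, inserts a scoring model between the merge and the display, and that scoring step is exactly where editorial judgment, and pressure on that judgment, enters.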
When a platform delivers its own ranked feed of content, and makes its own decisions about what to show or not show, taking on a broader editorial function is sensible. However, in an alternative deterministic space where users choose exactly what they want to see, the social platforms would be bound to deliver that content; editing it would violate the promises they made to the user.
In this world, the editorial function of social platforms could reasonably be limited to the curated feeds that the platforms explicitly take control of. Those users who want to could additionally use unedited deterministic or customized feeds, where they control the ranking rather than letting the social platforms do it for them.
New platforms should consider adding this option from the outset. Existing platforms should consider adding back such an option, which would allow them to exert even more editorial control over their in-house ranked feeds while enabling user-controlled spaces that they do not edit.
Ultimately, if platforms are forced to act more like editors of their house-ranked feeds, then they need to create other options where they faithfully deliver content without interruption or manipulation.
Legally Protect Private Spaces: Create Legal Frameworks to Prevent the Expansion of Editorial Scope to Private Groups and Messaging
Part of the great irony of the hand-wringing surrounding the New York Post incident, of course, is that the story went absolutely everywhere despite the fact that Twitter and Facebook took measures to limit its distribution.
This is because the vast majority of human communication doesn’t happen in public posts. It happens in private message threads and in groups beyond feeds and beyond the visibility of the popular media. So, while people today like to yell and argue about public news feeds because they can see them, they are focusing on the wrong thing.
Limits don’t stop speech, they just send it further underground.
As people come to realize that distribution at internet scale increasingly happens in these private contexts, there will be enormous pressure to move the focus of control beyond the spaces viewed as public and into the spaces viewed as private. These private spaces sit on common infrastructure, which theoretically allows them to be monitored, fact-checked and censored.
In other words, the mixture of public and private spaces on the internet creates a massive amount of risk for the long-term future of free speech, memory and thought.
The policy solution is to legally fortify existing private contexts against the oncoming assault. This means giving group administrators stronger legal control, authority and liability surrounding the spaces they build and the communities they attract.
Platforms should give people explicit rights over the private spaces they create and use—and contractually bind themselves to deliver all content, without modification, that people post in these spaces. They should also deeply respect the privacy of these spaces.
Making this shift likely means that group administrators and those who participate in private conversations have to take on more liability.
You don’t want the platforms to be liable for what is said in private contexts, because that means they will have to use the tools at their disposal to monitor and manipulate speech.
If someone needs to be responsible, it should be the people, who are a far stronger counterweight to government expansion of control than a handful of companies could ever be alone.
So it is ultimately in the interest of everyone for the platforms to shift their legal framework to push more rights (and responsibilities) away from themselves and onto their users.
Invest in Encryption, Decentralization and Other Technical Privacy Efforts
Of course, policy solutions to protect speech can only go so far. The real answer has to be an evolution of technological infrastructure to make sure people maintain the right to speak freely.
The first answer to this is encryption, which we already have. Apple’s firm pro-encryption stance in apps like iMessage is laudable, and has turned out to be politically very savvy as well.
Because the company doesn’t host any sort of public feed itself, and the content that flows through its network is fully encrypted, Apple is not currently part of the conversation about speech, control and censorship. That’s despite the fact that Apple hosts an enormous amount of discussion on its rails. The New York Post article was undoubtedly shared widely on its platform.
Facebook’s move toward encrypting private messaging and getting its messenger services to the same state as Apple’s is equally important. The task is a huge lift and is not clearly good for the company’s finances in the short to medium term, but it is critical to defending free speech.
That said, even with stronger encryption, any large enough pool of private communication offers a huge incentive to build back doors or otherwise modify things that allow oversight and manipulation of content.
The only true technological answer is to couple encryption with decentralization (which of course touches on the crypto world). This is a critical direction for those who value free speech.
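To show what that buys, here is a minimal sketch of the end-to-end property using the PyNaCl library; the library choice and variable names are mine, for illustration only. Whatever carries the message, a centralized platform or a decentralized network, sees nothing but ciphertext.

```python
# End-to-end encryption sketch using PyNaCl (an illustrative library choice).
# The relay that moves the message never holds the private keys and so can
# neither read, fact-check nor alter what is said.
from nacl.public import PrivateKey, Box

# Each participant generates a private key; only public keys are exchanged.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts to the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"the article everyone is arguing about")

# The ciphertext is all the platform ever handles or stores. Only the
# recipient, holding the matching private key, can recover the text.
receiving_box = Box(recipient_key, sender_key.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"the article everyone is arguing about"
```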
The problem with all this privacy tech is that it is less convenient for users, harder to build and less profitable than more centralized messaging solutions.
Encryption limits the functionality services can offer around core messaging, and distributed systems are fundamentally more expensive and slower than their centralized kin.
This reality is going to be hard to overcome, and it’s why private messaging is in such a precarious position.
Privacy and freedom are in a sense a public good facing a tragedy of the commons. And if privacy-rich distributed systems are only used for questionable or challenging material, they will be large targets for disruption.
I am not sure there is a way around this problem. I don’t think the market will move toward privacy-rich technology. Nonetheless, I very much hope those who are committed to privacy continue to push in this direction.
Evolve the Cultural Expectations Around Truth and Develop Strong Forms of Trusted Digital Identity
We live in a moment of crisis when it comes to truth. Up to now, humanity has enjoyed a long period where hard-to-fake recorded media (audio, photos, video) has allowed truth to come from almost anywhere.
In the big picture, with the rise of increasingly convincing deepfakes and the ability for anyone to manipulate media or create fantasy that is as believable as reality, the future of truth has to be based on networks of trust.
While we are having trouble digesting this reality as a culture, the default assumption has to be that information is adversarial. We have to learn to function well in an environment where by default you don’t trust anything, but you build a network of trusted sources over your lifetime.
What this means is that people and brands can build trusted identities over the long term, as well as networks they can rely upon to understand reality and make decisions.
But if we end up in a world where social networks try to play this role, we are in for a world of hurt. It isn’t just that social networks cannot possibly fact-check properly at scale, or that they offer a major consolidated target for manipulation by minorities and majorities alike. It is also that their attempts to fact-check make people far more susceptible to believing lies, because those attempts lull people into a false sense of trust. When a social media platform blocks a questionable New York Post article, it implicitly teaches people that any New York Post article they do see on a social platform is more likely to be true.
What we need instead is strong digital identity, where we can trust that the person or company represented as speaking is indeed the author of that speech. We also need to create a true marketplace of ideas where those who lie are punished and not believed, and those who turn out to be consistently trustworthy thrive.
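One plausible building block, sketched here with ordinary Ed25519 signatures via PyNaCl (again my choice of library, purely for illustration): a publisher signs what it says, and any reader holding the publisher’s public verify key can confirm authorship without trusting an intermediary platform.

```python
# Authorship verification sketch using Ed25519 signatures via PyNaCl
# (an illustrative library choice, not a prescription).
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# The publisher's long-lived identity key; the verify key is published openly.
publisher_key = SigningKey.generate()
verify_key = publisher_key.verify_key

signed_statement = publisher_key.sign(b"Statement attributed to this publisher.")

# A reader checks that the statement really came from this identity, unaltered.
try:
    original = verify_key.verify(signed_statement)
    print("Verified author's statement:", original.decode())
except BadSignatureError:
    print("Invalid signature: do not attribute this statement to the publisher.")
```

Reputation then attaches to the key over time: a publisher that consistently signs trustworthy statements earns trust, and one that signs lies cannot disown them later.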
We are a long way from this. To get there, we probably need digital national identity at scale. We also need trustable digital memory, which means moving away from ephemeral content where the record of what people say disappears. We also need frameworks for a true digital reputation—which flies in the face of current policy under rules like the General Data Protection Regulation, which tells people they own their data and identity.
But trusted, irrevocable, unfalsifiable, distributed digital identity is the only path forward that doesn’t create massive risk for society in the long term.
Conclusion: You Can’t Separate ‘Freedom of Speech’ and ‘Freedom of Reach’
It is popular in some technology circles to say that you believe in freedom of speech but not freedom of reach—you should be able to say what you want online, but you have no right to have social services distribute and magnify what you say.
This is a clever turn of phrase, but it’s completely wrong.
Freedom of speech isn’t the right to say what you want in an empty forest. Freedom of speech must fundamentally include the freedom to hear the voice of whomever you want to.
We have always had laws and cultural norms about what can be said in public, but until now it has never been even theoretically possible to control private speech—which has meant that it has always been fundamentally free. So, given the potential completeness of speech regulation, the stakes have never been higher. The decisions we make now will reverberate throughout history.
The Post incident should make it abundantly clear to all that in the case of ranked feeds, the storyline has been set. They will be fact-checked, in line with social media firms’ commitments after the last election.
For those who believe in free speech, it has never been more important to defend the space around ranked feeds, making it resistant to encroachment. That means we need to create deterministic spaces that are explicitly not fact-checked and edited. We also need to set up legal frameworks that guarantee people the right to have their content distributed and to hear from the voices they choose. And we need to pursue the fundamental technological and cultural shifts required to preserve freedom in the future.