What To Do About Misinformation in the Upcoming Election Cycle?

We are 14 months out from our first ‘AI’ presidential election. Everyone knows that this is going to be a misinformation disaster.

How bad will it be? This cycle is going to make 2016’s issues with “Russian bot farms” look downright quaint. Unlike the techno-fever-dream of the risk posed by Artificial General Intelligence, even today’s “crude” AI is a clear and present danger in its ability to destroy trust in the system and create chaos.

The question is what to do about it.

What is likely to happen is that we build this era’s equivalent of a “Maginot Line”—enacting costly and destructive regulatory changes that are designed to fight last generation’s information war, rather than this one.

My bet is that this will occur through an effort to roll back Section 230 of the Communications Decency Act and make social networks liable for misinformation on their platforms. That will force political speech off social media entirely and into the shadows of private messaging, where it is harder to track and where we will run up against the encryption wars.

That might sound better at first blush, but it is actually much, much worse.

Speech kicked off of social networks will not simply go away. Instead, those behind that speech will gravitate towards deeper private spaces, such as secret groups and Messenger threads, behind the veil of encryption where the speech isn’t seen, but can be even more extreme and more damaging.

The question will then become: will this generation of political and tech leaders let AI serve as the justification to dismantle encrypted digital speech? Will we end up sacrificing essential freedom of speech because of an overreaction to a single election cycle?

There is another option. Instead of going after the convenient but misguided Section 230 target, we could focus on the only sustainable answer—teaching modern civics and the need to rely on strong, trusted voices in communities.

People need to understand that unless they know someone, they cannot trust that they are real—and that wake-up call needs to happen now.

Further, it is critical that we actively build a large community of well-known, high-trust voices committed to upholding truth and reality. We could focus on empowering those people in a community with the tools, fact-checkers, and resources to know what to trust.

The Obvious AI Misinformation Problem

Hopefully any reader of The Information understands by now that what AI does is drop the cost of generating believable misinformation by several orders of magnitude.

When you use a “Russian bot farm” to generate misinformation and chaos, you can only have so much impact. You can only produce so much content, which limits the scope and specificity of what you do. Bot content is also hard to perfect: for every believable post those farms produced, there were many that were not.

This all changes in today’s world of AI. The cost of producing believable but fake information drops to zero. It becomes possible for “bots” to hold millions of simultaneous, personalized conversations, all of which drive an agenda or goal (something hard for even sophisticated people to do).

So, whereas with last era’s technology you might be able to have a bot farm run by humans operate some Facebook groups and make some viral posts, today’s AI technology allows you to have bots individually or in tiny groups cultivating people and their beliefs, connecting them to each other—and convincing them to take real world action.

This would be bad enough if there were one, or two, or 10 platforms in the world with the sophistication to do this. The problem we face in the coming months is that AI of sufficient quality is going to be everywhere, impossible to contain. The basic AI you need to produce all the disinformation you want will be in your pocket, on your phone.

Section 230, Our ‘Maginot Line’: The Bad but Likely Political Direction

So, if AI misinformation tools are going to be so widely distributed that there is no way you can control their use, what can be done to limit the spread of this type of content and deceit?

Well, the social media platforms that distribute speech can be told “you must stop misinformation from AI—this is your problem.”

Now, this would be entirely unfair, but for a cynical and pragmatic person, it is doable. And the way to do it is to get rid of Section 230, which protects platforms from liability for the speech that occurs on their platform.

What do the social media platforms then have to do? They can’t possibly sort fact from AI fiction on their platforms. They don’t have the tools or tech. They could throw a million content moderators at the problem and still have no chance of succeeding.

So their only move is to completely stop public political speech on their platforms. Just turn it off.

Eight years ago, banning political speech from a platform would have sounded crazy. The platforms’ goal, and their value, was to enable people to connect with each other and share. The vision was a modern digital commons for discussion and debate.

But many of the platforms have now concluded that, as media companies, they can’t support real discourse. It is too charged, too fraught, and it is much better to stick to unobjectionable fun photos and cat memes.

Some people might think: great, the old world of CNN and newspaper news was better anyway, so let’s just roll back the clock to that model.

But that is not how things would play out. Instead, the speech would be forced underground into private messaging channels.

To some extent, we as a society are already there. While we focus on what shows up in public Facebook, Instagram, and Twitter feeds, that is just the tip of the iceberg of the digital content people share online.

Most content—the real content—is already in group Messenger threads you can’t see as an outsider. And since you don’t get to see that content, it doesn’t get attention. (This is how iMessage, which has tons of terrible stuff on it, escapes content moderation controversies.)

But once the political content gets erased from the mainline public social spaces, you better believe that it will explode in private spaces at scale.

For instance, if you want to know what is really going on in the Hamas-Israel conflict, the answers are on private Discord servers, even though what is currently on the public part of the social web is fine for most people. Get rid of that public part, though, and everyone will go find the private servers.

This is where you get into real trouble. You then face a trade between privacy and encryption on one side and the ability to control AI-driven content at scale on the other.

So which do you want? Are you willing to sacrifice your right to encrypted private speech in return for moderating AI-driven misinformation? That is beyond a hard trade. In fact, it is a bum deal and a scary one.

Preventing Americans from privately chatting with each other would be a bridge too far. If we go down this road, we will face real censorship and real authoritarian ends sooner rather than later.

The Right Solution—Civics and Community

So, if that isn’t the solution, then what is?

In theory you could, this month, tell all the AI companies that they need to throw out their models and rebuild from scratch, this time getting opt-in from the content owners they steal from. That would set them back long enough to buy time for better decisions.

You could put really strong oversight or controls on AI companies and what they share, although sadly the cat is already out of the bag there. And even if it weren’t, there is no way to prevent others from building models overseas without oversight. It might be more doable if only the most cutting-edge LLMs were useful for misinformation, but we are long past that point. Really basic stuff is plenty powerful if you want to be truly evil.

The only solution isn’t sexy—it is civics, including media literacy and community.

Don’t roll your eyes. I am not in the camp of people who believe folks are so uneducated and unsophisticated that they fall for anything. I believe instead that people mostly lack meaning, purpose, and community—and when they find those in something like Flat Earth, they dive in. And once they dive in, it is fun to convince themselves it is true. The incentives to believe crazy things are intense (it isn’t just a stupidity or literacy issue).

That said, we are entering a new world where people have to start assuming that anything they see is, by default, fantasy until proven otherwise.

This really sucks. It will, ironically, give credence to what conspiracy-minded people have been saying for a long time (the moon landing photos were fake!). But while those pictures obviously weren’t fake, and the conspiracy folks of the last many years have been just that, going forward the basically correct posture toward anything you see or hear will be to start with a conspiratorial thesis unless you know who it is from.

We have lived through a golden era in which truth could come from anywhere with “evidence”—and the “evidence” could speak for itself. We are rapidly heading back to the way things were for most of human history, and people need to understand that.

Community

So then, if you can’t trust anything you see or hear anymore for anything other than entertainment purposes, whom do you trust, and how do those people stay trustworthy together?

I don’t think it is institutions that we will rebuild trust with. Those days feel over for the foreseeable future. And it is definitely not politicians, for whom the game of partisan politics is, unfortunately, the job.

But people trust individuals. And in the internet age, they trust the celebrities and personalities they feel are their “friends,” sitting in their pockets on Instagram.

We need to get a diversity of those voices, each with their own community, from across the aisle and the ideological spectrum, to build relationships with one another in a trusted “influencer” community.

We need to host conferences, do “trust falls,” and build tools for them so that they can hold the country together.

Practically, what does this look like? Maybe it is like a Young Presidents Organization for a diverse set of influencers. Perhaps it starts with David Beckham or Anna Wintour or Jay-Z pulling a group together.

But we need to re-establish trusted communities, Rotary Clubs, and lodges. And given the urgency to provide this stability—there are only 14 months before the next presidential election—we need to start by being able to say “you can trust this group of people with large followings to be on the same page and tell the truth.”

I know this sounds weak and amorphous. But it is the direction things need to go. The world operated on trust networks long before we had photos and “citizen journalists.” That is what we need to get back to, with a new network topology that is realistic for the internet age.

That network is what we need to be investing in, not monkeying with Section 230 to score points and push speech underground.

Conclusion & Further Reading

The funny, or sad, part of all of this is that where we are today was easy to anticipate. I and others have been writing columns in the lead-up to this moment for years. If you want to read more, look into the following columns, most of which are from the 2016–2018 era (when this set of issues was last top of mind for folks).

How the Internet Broke and What to Do About It

The Future of Free Speech

The Future of Privacy

The Challenges Facing Free Speech

In Defense of Deep Fakes

Free Speech and Democracy in the Age of Micro-Targeting

The Slippery Slope to Censored Speech