For the last century we have had the luxury of being able to rely on photos, video and audio for a recorded understanding of truth. But for most of human history, trust and truth were far more complicated. World events couldn’t be directly observed.
Surprisingly, after just a brief period of being able to trust recordings, we find ourselves on a clear path back in time. Technology, particularly artificial intelligence, is taking away our ability to directly observe truth. It is becoming as easy to create fake but believable images, video, and even audio as it is today to create text with false claims.
In this respect, I oddly agree with Elon Musk. I don’t buy his grandstanding about how AI poses a mortal threat to humanity as a sentient or more powerful intelligence. I do believe, however, that AI is increasingly effective at generating highly believable lies. The ease and effectiveness with which we will be able to lie to each other are going to tear at the fabric of our global society in deeply disruptive ways.
The Not-Too-Distant Future: A World Awash in Fake Video, Audio and Photos
In the last several months, videos have been floating around the web showing how video and audio can easily be manipulated to make it appear that almost anyone is saying almost anything. A good recent example, from University of Washington professors, uses former President Barack Obama’s speech patterns to demonstrate one form of the technology. Another good demonstration, from Stanford last year, shows facial expression-matching research.
In a sense, this isn't new. For a long time, Hollywood studios have been creating believable fakes, bringing back deceased actors or grafting live actors onto increasingly expressive cartoon scaffoldings.
Much like weapons proliferation, however, the ability to create believable fakes isn’t too scary while it is extremely time-consuming and expensive to access. It becomes far more dangerous when the technology gets good enough that almost anyone with access to the internet, hundreds of dollars of equipment, and a few hours can generate believable fake media. That’s the direction in which we are marching.
We already have some good indications of what this is going to feel like as consumers. For many years, fake text has been easy to generate and distribute on the internet, with humans writing some content and bots writing other content. The result is that the open web is largely discredited and unusable at this point. It is impossible to know what is true and what is not, which at least partially explains the dramatic fall of the open web and the rise of walled gardens.
Perfect Lies and Discredited Reality
It would be one thing if the world evolved in a direction where there were massive amounts of fake but highly believable content, yet it was still easy to tell what was actually true. Then this would just be a spam problem. But sadly, one of the things modern AI techniques are best at is using massive amounts of context to make unreality fully believable.
If you want to viscerally understand this, watch this video demonstrating neural-network-based photo editing.
Rather than making traditional pixel-by-pixel image adjustments, the video demonstrates how a generative adversarial network-based approach can power a photo editing system that takes a small amount of input about how you want to manipulate an image and fills in the rest to make the whole image believable. Rather than needing to perfectly create a fake beard or a different hairstyle, all you need to do is suggest a few pixels of what you want. The system can use its knowledge of a large set of real images to back-solve for believability.
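To make that back-solving idea a little more concrete, here is a minimal sketch, written in PyTorch, of how such a system might work under the hood. It assumes a pretrained generative model of faces (stubbed out below with a placeholder network, not the actual model from the video): the user supplies a handful of pixels, and optimization over the generator’s latent space fills in a believable image around them.

```python
# A minimal sketch of the "back-solve for believability" idea: given a
# pretrained generative model, find a latent code whose output matches the
# few pixels the user sketched, and let the generator fill in the rest.
# The generator below is a stand-in; a real system would load a pretrained
# GAN generator (an illustrative assumption, not the system in the video).
import torch
import torch.nn as nn

LATENT_DIM = 128
IMG_PIXELS = 64 * 64 * 3

# Placeholder "pretrained" generator: latent vector -> flattened RGB image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512),
    nn.ReLU(),
    nn.Linear(512, IMG_PIXELS),
    nn.Tanh(),
)
generator.eval()

def edit_image(user_strokes: torch.Tensor, mask: torch.Tensor, steps: int = 200):
    """Find an image the generator considers plausible that also matches
    the user's sparse strokes (mask == 1 marks the edited pixels)."""
    z = torch.randn(1, LATENT_DIM, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        optimizer.zero_grad()
        output = generator(z)
        # Only the few user-specified pixels constrain the result; the
        # generator's learned prior fills in everything else coherently.
        loss = ((output - user_strokes) * mask).pow(2).sum()
        loss.backward()
        optimizer.step()
    return generator(z).detach()

# Example: the user paints a handful of dark pixels where a beard should go.
strokes = torch.zeros(1, IMG_PIXELS)
mask = torch.zeros(1, IMG_PIXELS)
mask[0, :200] = 1.0      # pretend these indices are the sketched region
strokes[0, :200] = -0.8  # roughly "dark" pixel values
edited = edit_image(strokes, mask)
```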
In the not-too-distant future, using a data set of real faces (or real video, real audio and speech patterns), you will be able to choose the edit you want to make to a piece of media. Machines will fill in everything you would need to make your content believable.
Fake images aren’t new. In the fashion industry today no one trusts the reality of imagery anymore. Even on a consumer level, photo filters and simple retouch tools have made images more about emotion and expression than reality.
But we are rapidly approaching a world where no images can be believed in and of themselves, and the edits people make aren’t simply fashion touchups.
Tools for the Future of Truth
There are effectively three problems we need to solve. The first is identifying which primary sources to trust. The second is extending those networks of trust beyond the people and sources we immediately know. The third is trusting the messaging infrastructure not to corrupt or alter the information we share.
Identifying primary sources to trust is in some ways the easiest problem to solve.
Capitalism is, at its best, a technology of truth. For centuries, powerful families and royalty paid agents all over the globe for accurate information. If you wanted an accurate view of reality, you cultivated and paid a set of people to verify the truth to the best of their abilities. Even if you weren’t a Rothschild with your own private sources, there were more accessible journals and newsletters you could buy, and their trustworthiness was aligned with reality because their value depended on it. It is simple and it works.
In the last 100 years, we frankly have become lazy about this. A world of recorded media, which temporarily made truth easier to understand, has depressed our willingness to pay for trustable information. The emergence of modern advertising created an alternate and seductive framework for paying for information, at least in the short term.
But as the world realigns and truth becomes harder to discern, the simple but pure idea of paying agents for an understanding of truth once again becomes important. You will get what you pay for.
Extending Trust Through Social Networks
People love to complain about the spam, scams and lies on things like Facebook. But the reality is that for most people, Facebook provides a far more trustable source of reality than the open web. Using people you trust as the ultimate input to screen for reality is the way trust has worked historically, and digital tools that work along those same lines are critical. If the network of people you listen to is good, your information is good.
Of course, the experience is only as good as the people and things you connect to. Social networking in the digital world, just as in the physical world, is a double-edged sword. It is just social infrastructure, so if you are connected to people who aren’t trustworthy or who want to use the tool in manipulative ways, it will damage your view of reality. And we are still figuring out how to know whom to trust online (particularly if you are connecting to people you don’t know in the real world).
Modern social networks are also no longer just the network itself. They have an incentive for people to share more, and for what gets shared to be more engaging. So there is enormous pressure on the networks to make product design decisions that implicitly or explicitly shape opinions and affect the messages people want to share. This makes them a party to the information, not just to the trust relationships, which is a tenuous position at best.
Trusting the Messengers
Even if you have faith in a set of primary sources, and in the sanctity of a social network of trust, you still have to trust the technical systems in the middle that deliver your messages. This is where debates over encryption, VPNs, blockchains and maybe someday quantum communication come in.
There was a brief period when it felt like, at least in the West, people trusted the sanctity of our messaging infrastructure. For instance, for generations the American people trusted that the mail system was private and secure, with steep legal penalties for opening mail not intended for them.
This is all, sadly, coming to a head again. More players are achieving the technical sophistication to snoop on and modify messages passing over the internet; China, for instance, is now able to block images within messenger applications based on their content.
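For a sense of what defending against that kind of snooping and tampering looks like in practice, here is a small sketch using Python’s cryptography library (my choice for illustration, not a prescription): a message that is encrypted and authenticated end to end cannot be read by a middleman, and any modification in transit is detected and rejected rather than silently delivered.

```python
# Sketch: authenticated encryption makes a message both unreadable and
# tamper-evident to anyone sitting in the middle of the wire.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()   # shared out of band by sender and receiver
channel = Fernet(key)

token = channel.encrypt(b"The meeting is at noon.")

# A middleman who flips even one bit produces a token that fails
# authentication instead of delivering a silently altered message.
tampered = bytearray(token)
tampered[10] ^= 0x01

try:
    channel.decrypt(bytes(tampered))
except InvalidToken:
    print("Tampering detected: message rejected.")

print(channel.decrypt(token))  # the untouched message decrypts normally
```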
There is some good news in things like the rise of blockchains. While the current applications of blockchains might be simple registers of accounts and values, the idea of highly distributed databases is going to be important for the future of truth. They allow for a consensus truth where, in theory, no single actor can manipulate or change the group’s sense of reality.
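A stripped-down sketch of that property, ignoring consensus protocols, mining and networking entirely, might look like the following: each record commits to the hash of the record before it, so if anyone quietly rewrites history, every other copy of the chain stops verifying. The structure and field names here are illustrative, not any particular blockchain’s format.

```python
# Sketch of the tamper evidence at the core of a blockchain-style ledger:
# every entry includes the hash of the previous entry, so rewriting any
# past record breaks the chain for everyone holding a copy.
import hashlib
import json

def entry_hash(contents: dict) -> str:
    # Hash a canonical serialization of the entry's contents.
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append(chain: list, record: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    contents = {"record": record, "prev_hash": prev}
    chain.append({**contents, "hash": entry_hash(contents)})

def verify(chain: list) -> bool:
    prev = "genesis"
    for entry in chain:
        contents = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
        if entry["prev_hash"] != prev or entry["hash"] != entry_hash(contents):
            return False
        prev = entry["hash"]
    return True

ledger: list = []
append(ledger, "Photo X was published by source A on 2017-08-01")
append(ledger, "Photo X matches the camera-original checksum")

print(verify(ledger))   # True: every copy of the chain agrees

ledger[0]["record"] = "Photo X was published by source B"  # a quiet rewrite
print(verify(ledger))   # False: the tampering is immediately visible
```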
The bad news about blockchains is that, just like democracy, truth becomes what the majority of actors in the system believe to be true, not actual reality. The other bad news about blockchains, in their current form, is that while in theory they should be platforms for distributed truth, in practice there is a large incentive for power to consolidate into a small number of hands that aggregate much of the compute power that keeps them secure. Just like social networking, there is a tendency toward consolidation.
In the end, the future of trusting the messengers in the middle of communication flows is going to rest on two things: first, “wires” secured with encryption, and maybe someday quantum technology (though that will be both extremely difficult and highly controlled by governments); and second, tamper-proof databases, which will probably resemble highly distributed databases spread out using blockchains.
The Tower of Babel
Personally, I am deeply disquieted by my inability to know whom to trust these days. On extremely important issues like global warming, polarized news sources with conflicting agendas make it hard to know whether we are facing a problem on a 20-year or a 200-year timescale. As an American, I still mostly trust the formal layers of communication I use. But if I lived in many other countries on Earth, I would no longer trust the network itself.
It is hard not to think of the story of the Tower of Babel when evaluating the current situation. For a while, the internet looked like a grand project bringing the world together and allowing us all to coordinate on important human-scale projects, like combatting global warming.
As in the story of the Tower of Babel, however, just as we started using a common language, something came along and scattered the languages and the trust. The internet regionalized and centralized, people lost a sense of whom to trust, and in the process we lost much of the coordination power we once had.
I am still an optimist. With the right incentives for primary sources, strong social networking technology, and good encryption and distribution, I think the internet can still be a major leap forward for a global understanding of truth, trust and our ability to work together.
But it isn’t going to be easy. After a honeymoon century where we could trust primary images, video, and audio—and the sanctity of the internet—we are going to have to start working harder for the truth. Technology gives, and technology takes away.