In Defense of Deep Fakes

Nearly everyone in the technology community is worried about the near-term prospect of deep fakes destroying our trust in the media and our collective sense of reality.

That is understandable. When it becomes as easy to generate convincing fake media as it is to record reality, our ability to trust what we see and hear by default evaporates. In this way the technology subverts our ability to function as a society.

That said, I think there is a far less popular but equally important position to consider, which is that the ability to generate deep fakes is a powerful technology that we as a society will want broadly available, not tightly held by just a few powerful institutions.

First, deep fake technology is critical to securing effective and inexpensive privacy in the future.

Second, if you believe that we will have to adopt a more adversarial distrust-by-default approach to information, rapidly and broadly distributing deep fake technology might be healthier than the alternative, and might make it easier for society to critically evaluate reality. The alternative is a painful, slow transition in which we are not sure what can and cannot be trusted because the technology is unevenly distributed.

The more I consider it, the more it feels to me that the technology to generate deep fakes has some close similarities to encryption technology. While there are costs and challenges to the technology’s existence, from a power dynamic perspective many would argue we are better off with a world where everyone has access to it versus a world where just a few governments and wealthy institutions do.

Deep Fake Technology as a Key Privacy Tool 

One of the most dramatic reversals of our age is the flip from a world where privacy was cheap by default and publicity was expensive to the opposite scenario.

It isn’t that privacy is going to be impossible for people to have in the future; it is just going to be expensive. Regardless of any regulation, because the fundamental technology for observing and saving information is getting so cheap and widely available, the default assumption is going to become that people and their actions are public unless explicitly private—not the other way around.

Once you accept this, the question is only, what does cost-effective privacy look like for those who want it?

Formal privacy where you keep information from ever leaking out in any form is extremely expensive to set up and maintain. No matter how technically sophisticated you are, you take on enormous counterparty risk with each and every person you interact with, and a single mistake can compromise your entire privacy system. 

By comparison, it is much less expensive to generate, at high volume, many believable but untrue information trails—a practice known in some circles as “chaffing”—in order to hide truths in a sea of lies.
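To make the idea concrete, here is a toy Python sketch of chaffing under simple assumptions (the record fields, city list, and function names are all hypothetical): one real record is buried among many fabricated but plausible decoys, so an observer of the trail cannot tell which entry is true.

```python
import random
from datetime import datetime, timedelta

# Hypothetical example for illustration only: decoy "location check-in"
# records that make one real record indistinguishable from fakes.
CITIES = ["Lisbon", "Denver", "Osaka", "Tallinn", "Nairobi", "Auckland"]

def chaff(real_record, n_decoys, seed=None):
    """Return the real record shuffled in among n_decoys fabricated ones."""
    rng = random.Random(seed)
    base = datetime(2024, 1, 1)
    decoys = [
        {
            "city": rng.choice(CITIES),
            # Random timestamp within the year (525600 minutes = 365 days).
            "timestamp": (base + timedelta(minutes=rng.randrange(525600))).isoformat(),
        }
        for _ in range(n_decoys)
    ]
    records = decoys + [real_record]
    rng.shuffle(records)  # the true record's position reveals nothing
    return records

trail = chaff({"city": "Berlin", "timestamp": "2024-06-01T09:30:00"}, n_decoys=99)
print(len(trail))  # 100 records, only one of which is real
```

The point of the sketch is the asymmetry the essay describes: producing the 99 decoys costs almost nothing, while an adversary who wants the truth must somehow falsify each one.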

If you believe that access to inexpensive privacy will be important in the future, you can argue that the ability for everyone to generate convincing fakes at scale is an important privacy technology. If your choice is between only a few institutions having access to cheap privacy, or everyone having access to cheap privacy, it is valid to choose the latter. You would want to see deep fake technology distributed as widely as possible.

Deep Fake Technology to Differentiate Reality from Fantasy

Beyond the privacy argument, broadly accessible deep fake technology could help us with one of the biggest shifts humanity is going to have to digest in the coming years, which is the move away from clear reality and toward a default state of living in a hybrid fantasy-reality environment. 

We used to have reasonably clear lines between the “real world” and the “fantasy land” of books, stories, and movies. Historically, religion might have been a vehicle for blurring the lines, but most people would agree that the physical friction of the predigital world meant that it was easier in a day-to-day sense to separate fantasy from reality. 

In the coming years, especially with the rise of VR and AR, we are going to move further away from living in reality and toward living in hybrid reality. Our own personalized fantasies and what we find entertaining and engaging will be mixed into and overlaid on the world we see around us. And for many people, the fantasy part will become more compelling than the real world.

The problem with this mixed reality is that it destroys people's understanding of what is real and what is not—and their ability to reason sensibly about the real world as distinct from private fantasy.

Think of it as entering a period where people will have trouble distinguishing between their sleeping dreams and their waking lives.  

So, if you accept that this is going to happen, the question is, what can you do to best manage the transition and avoid major issues where people fail to distinguish between fantasies and reality?  

The first-blush answer might be to try to slow down the transition and have it happen gradually. I believe, however, that ripping the Band-Aid off might prove more successful. People struggle with the middle ground of not knowing what they can and cannot trust. If deep fake technology were broadly available, however, I believe people would rapidly adopt a more discerning approach to evaluating what is real and what is not.

To take the dream analogy, if people know by default they are almost certainly awake or almost certainly dreaming, then they can function sensibly in either environment. What is dangerous is when you don’t know if you are awake or dreaming, which is what a world of gradual and asymmetric rollout of deep fake technology might yield.  

To internalize how big a deal this is, consider the small ways this is happening today. Right now people experience the hybridization of fantasy and reality in the form of things like “filter bubbles” in social media, where people follow others like themselves and don’t see other viewpoints. You can also consider the hyperreality of Instagram, where professional friends and photo editing take real lives and experiences and blur what is real and what is not. These have major impacts on how we live and function as a society, and yet this is all quaint compared to the immersive and continuous semifantasy reality we will all likely inhabit in the near future.

We might well be better off moving rapidly to a world where everything is assumed to be fantasy until proven otherwise.

The Importance of Privacy and Truth Symmetry

Personally, the thing I find scariest about modern technology is the possibility of massive imbalances in access and power.

We have to accept that some institutions and people are going to have access to deep fake technology. Pandora’s box is open and it cannot be shut again.

If some people have access to the benefits of deep fake technology in terms of its usefulness for generating privacy and shaping people’s understanding of reality, then we are much better off with that power being widely distributed versus narrowly held.

Some might try to argue that, as with nuclear weapons, even if Pandora’s box is open we should work hard to limit access. But I don’t think the analogy holds. Unlike nuclear weapons, deep fake technology does not have a complicated supply chain that you can crack down on. The only possible way to limit proliferation would be truly authoritarian control of access to memory and computing, which is perhaps the biggest social liability of them all.

Rather than fearing deep fake technology, we need to understand its uses and benefits, and rapidly switch to a world where we assume its broad availability.