@BobbyChesney and I have a piece coming out in @ForeignAffairs on Deep Fakes in the January/February edition. For a preview, https://t.co/chl5MjoJ0X
— Danielle Citron (@daniellecitron) December 11, 2018
A picture may be worth a thousand words, but there is nothing that persuades quite like an audio or video recording of an event. At a time when partisans can barely agree on facts, such persuasiveness might seem as if it could bring a welcome clarity. Audio and video recordings allow people to become firsthand witnesses of an event, sparing them the need to decide whether to trust someone else’s account of it. And thanks to smartphones, which make it easy to capture audio and video content, and social media platforms, which allow that content to be shared and consumed, people today can rely on their own eyes and ears to an unprecedented degree.
Therein lies a great danger. Imagine a video depicting the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran. Or an audio clip of Iranian officials planning a covert operation to kill Sunni leaders in a particular province of Iraq. Or a video showing an American general in Afghanistan burning a Koran. In a world already primed for violence, such recordings would have a powerful potential for incitement. Now imagine that these recordings could be faked using tools available to almost anyone with a laptop and access to the Internet—and that the resulting fakes are so convincing that they are impossible to distinguish from the real thing.
Advances in digital technology could soon make this nightmare a reality. Thanks to the rise of “deepfakes”—highly realistic and difficult-to-detect digital manipulations of audio or video—it is becoming easier than ever to portray someone saying or doing something he or she never said or did. Worse, the means to create deepfakes are likely to proliferate quickly, producing an ever-widening circle of actors capable of deploying them for political purposes. Disinformation is an ancient art, of course, and one with a renewed relevance today. But as deepfake technology develops and spreads, the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields.
DAWN OF THE DEEPFAKES
Deepfakes are the product of recent advances in a form of artificial intelligence known as “deep learning,” in which sets of algorithms called “neural networks” learn to infer rules and replicate patterns by sifting through large data sets. (Google, for instance, has used this technique to develop powerful image-classification algorithms for its search engine.) Deepfakes emerge from a specific type of deep learning in which pairs of algorithms are pitted against each other in “generative adversarial networks,” or GANs. In a GAN, one algorithm, the “generator,” creates content modeled on source data (for instance, making artificial images of cats from a database of real cat pictures), while a second algorithm, the “discriminator,” tries to spot the artificial content (picking out the fake cat images). Because each algorithm constantly trains against the other, such pairings can lead to rapid improvement, allowing GANs to produce highly realistic yet fake audio and video content.
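The adversarial loop described above can be illustrated with a deliberately simplified toy sketch. This is not a real GAN: here the “generator” merely tunes the mean of a one-dimensional Gaussian, and the “discriminator” is a fixed statistical test rather than a trained neural network. Both simplifications are assumptions made purely for illustration of the feedback dynamic.

```python
import random

# Toy sketch of the adversarial loop (illustrative only, not a real GAN).
# "Real" data: samples from a Gaussian with mean 5. The generator tries
# to produce fakes the discriminator cannot tell apart from the real data.

random.seed(0)
real = [random.gauss(5.0, 1.0) for _ in range(2000)]
real_mean = sum(real) / len(real)

def detectability(candidate_mean):
    """Discriminator (here a fixed statistical test): how easily a batch
    of fakes drawn around candidate_mean is told apart from the real data,
    measured as the gap between the fake batch's mean and the real mean."""
    fakes = [random.gauss(candidate_mean, 1.0) for _ in range(200)]
    return abs(sum(fakes) / len(fakes) - real_mean)

# Generator: repeatedly tweak its parameter, keeping whichever tweak the
# discriminator finds harder to detect. This adversarial pressure is what
# drives the fakes toward realism.
gen_mean = 0.0
for _ in range(200):
    gen_mean = min((gen_mean + 0.1, gen_mean - 0.1), key=detectability)

print(round(gen_mean, 1))  # close to 5.0: fakes now resemble the real data
```

In an actual GAN, both sides are deep neural networks updated by gradient descent, and the data are high-dimensional images or audio rather than single numbers; but the core dynamic is the same: the generator improves precisely because the discriminator keeps catching it.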
This technology has the potential to proliferate widely. Commercial and even free deepfake services have already appeared in the open market, and versions with alarmingly few safeguards are likely to emerge on the black market. The spread of these services will lower the barriers to entry, meaning that soon, the only practical constraint on one’s ability to produce a deepfake will be access to training materials—that is, audio and video of the person to be modeled—to feed the GAN. The capacity to create professional-grade forgeries will come within reach of nearly anyone with sufficient interest and the knowledge of where to go for help.
Deepfakes have a number of worthy applications. Modified audio or video of a historical figure, for example, could be created for the purpose of educating children. One company even claims that it can use the technology to restore speech to individuals who have lost their voice to disease. But deepfakes can and will be used for darker purposes, as well. Users have already employed deepfake technology to insert people’s faces into pornography without their consent or knowledge, and the growing ease of making fake audio and video content will create ample opportunities for blackmail, intimidation, and sabotage. The most frightening applications of deepfake technology, however, may well be in the realms of politics and international affairs. There, deepfakes may be used to create unusually effective lies capable of inciting violence, discrediting leaders and institutions, or even tipping elections.
Deepfakes have the potential to be especially destructive because they are arriving at a time when it is already becoming harder to separate fact from fiction. For much of the twentieth century, magazines, newspapers, and television broadcasters managed the flow of information to the public. Journalists established rigorous professional standards to control the quality of news, and the relatively small number of mass media outlets meant that only a limited number of individuals and organizations could distribute information widely. Over the last decade, however, more and more people have begun to get their information from social media platforms, such as Facebook and Twitter, which depend on a vast array of users to generate relatively unfiltered content. Users tend to curate their experiences so that they mostly encounter perspectives they already agree with (a tendency heightened by the platforms’ algorithms), turning their social media feeds into echo chambers. These platforms are also susceptible to so-called information cascades, whereby people pass along information shared by others without bothering to check if it is true, making it appear more credible in the process. The end result is that falsehoods can spread faster than ever before.
These dynamics will make social media fertile ground for circulating deepfakes, with potentially explosive implications for politics. Russia’s attempt to influence the 2016 U.S. presidential election—spreading divisive and politically inflammatory messages on Facebook and Twitter—already demonstrated how easily disinformation can be injected into the social media bloodstream. The deepfakes of tomorrow will be more vivid and realistic and thus more shareable than the fake news of 2016. And because people are especially prone to sharing negative and novel information, the more salacious the deepfakes, the better...