With a kind of machine learning known as “deep learning,” a computer program can quickly teach itself to recreate a person’s image or voice, manipulate it like a puppeteer controlling a puppet, and blend it seamlessly into an environment the person never inhabited. This kind of fake video or audio, a “deepfake,” can be very difficult to distinguish from genuine camera footage. As a consequence, a seemingly real political speech by a U.S. President or other world leader might be one that never occurred. Deepfake creators have, in fact, generated speeches of this kind to demonstrate the power of the technology: a video showing President Obama warning, in a speech he never gave, about the dangers of deepfakes; another showing President Nixon announcing the failure of the 1969 Apollo 11 moon mission and the death of its astronauts; and a fake Christmas speech by Queen Elizabeth II marking the end of 2020. In this form, video and audio recordings or transmissions are no longer a window into remote events. They are instead a portal through which we see a hyper-realistic world that is fabricated and fictional.
When individuals deepen deception in this way, moving from fake words to the creation of fake evidence, does First Amendment protection move with them? Does the First Amendment protect them not only when they insert falsity into their own words, as the Supreme Court held in Alvarez, but also when they find ways to fabricate the evidence itself, such as a deepfake video? Where someone not only tells a verbal lie but also, or instead, falsifies the kind of external evidence others would use to check the veracity of that lie, such as a website apparently created by an independent source or a video recording, does the First Amendment also protect this additional deception?