By Will Bedingfield for WIRED
JEAN-LUC GODARD ONCE claimed, regarding cinema, “When I die, it will be the end.” Godard died last month; film perseveres. Yet artificial intelligence has raised a kindred specter: that humans may become obsolete long before their artistic mediums do. Novels written by GPT-3; art conjured by DALL·E—machines could be making art long after people are gone. Actors are not exempt. As deepfakes evolve, fears are mounting that future films, TV shows, and commercials may not need them at all.
Not even Bruce Willis. Last month the actor had the strange experience of “appearing” in an ad in which he was tied to a bomb on the back of a yacht, growling “Mississippi” in a Russian accent. The Telegraph reported that the deepfake was possible because he had sold his performance rights. That wasn’t quite true—a representative for Willis later told reporters the actor had done no such thing. And as my colleague Steven Levy wrote a few days ago, the company that made the ad—the cheekily named Deepcake—never claimed to hold Willis’ future rights, but had struck a deal that allowed it to map a digital version of his appearance onto another actor in a commercial for the Russian cell network Megafon.
Yet the question of “who owns Bruce Willis,” as Levy put it, isn’t only a concern for the Hollywood star and his representatives. It concerns actors unions across the world, fighting against contracts that exploit their members’ naivety about AI. And, for some experts, it’s a question that implicates everyone, portending a wilder, dystopian future—one in which identities are bought, sold, and seized.
In America, explains Jennifer Rothman, author of The Right of Publicity: Privacy Reimagined for a Public World, people have a right under various state laws to limit unauthorized appropriation of their identities, particularly their names and likenesses. The scope of protection varies state by state. Some states have statutes protecting the “right of publicity” (a law barring unauthorized use of a person’s name, likeness, voice, or other indicia of identity, usually for a commercial purpose), while others offer these safeguards through common, or judge-made, law. A few have both statutory and common law protections.
The devil is in the details, though. “A private individual or company that simply creates a deepfake of a person, without more, does not obviously run afoul of the right of publicity,” explains David A. Simon, a research fellow at the Petrie-Flom Center at Harvard Law School. In other words, if a Willis deepfake appears in an American ad for potato chips, the actor likely has a viable claim; if someone deepfakes Willis’ yippie-ki-yay swagger into a home movie and throws it on YouTube, he may not have much of a case. Under certain circumstances, deepfake makers are protected by the First Amendment. As one Northwestern University paper put it last year, “the government cannot prohibit speech merely because the speech is false; there must be some additional problem,” such as defamation.
“The right of publicity requires the commercial appropriation of identity while tort law does not always require a commercial element,” explains Simon. “If an actor’s deepfake is manipulated to portray someone in a defamatory manner, or used to defame someone else, the actor may have the ability to sue in tort.”
Actors unions have been fretting over deepfakes for decades. The Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) first took an interest via sports video games, which started generating their own image rights controversies back in 2013. Even looking at the rudimentary, blocky depictions of athletes in those games, it was clear the technology would develop until dropping actors into movies became as easy as developers dropping quarterbacks into Madden.