Weekly Roundup: Are Deepfakes Preventable?
Tom Hanks' unauthorized likeness in a dental commercial drove this week's AI news cycle.
Deepfakes have far-reaching consequences across many policy areas. They have substantially lowered the barriers to entry for disinformation campaigns, and they have made it so easy for fraudsters to impersonate someone online that a new Senate bill would authorize the USPS to assist businesses and government agencies by conducting in-person identity verifications of prospective clients and employees.
Deepfakes have non-nefarious purposes too. The entertainment industry uses them to create flashback scenes, with the limits of the practice being a subject of contract negotiations. Tom Hanks, whose deep-faked image was used in an advertising campaign without his consent, provides an apt summation of the problem:
I could be hit by a bus tomorrow, and that's it, but performances can go on, and outside of the understanding that it's been done with AI or deep-fake, there’ll be nothing to tell you that it’s not me and me alone. And it’s going to have some degree of lifelike quality. That’s certainly an artistic challenge, but it’s also a legal one.
Determining whether content is authentic is a big problem. The book AI 2041 dedicates a chapter to it; in the book’s fictional society, the solution is to use AI to detect AI-generated deepfakes. Researchers are working to make this a reality, but the technology for reliably detecting AI fakes is simply not there yet.
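To make the idea concrete, such a detector is usually just a binary classifier trained on labeled real and synthetic media. The Python sketch below is a minimal illustration of that shape, assuming a generic pretrained image backbone with a fine-tuned “probability of fake” head; the model choice, file name, and threshold are placeholders, and, as noted above, detectors of this kind remain unreliable in practice. In particular, a model trained against today’s generators tends to miss tomorrow’s.

```python
# Illustrative sketch only: a toy "real vs. fake" image classifier.
# The backbone, preprocessing, and threshold are assumptions, not a working detector.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Generic pretrained backbone with its classification head replaced by a
# single "probability of fake" output. In practice this head would be
# fine-tuned on frames labeled real or synthetic before use.
detector = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
detector.fc = nn.Linear(detector.fc.in_features, 1)
detector.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def probability_fake(path: str) -> float:
    """Return the model's estimated probability that the image at `path` is synthetic."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logit = detector(image)
    return torch.sigmoid(logit).item()

# A platform might flag an upload when the score crosses some chosen threshold, e.g.:
# if probability_fake("frame.jpg") > 0.5: quarantine_for_review()   # hypothetical handler
```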
Proposed regulatory solutions to this problem are nearly endless, partly because no single solution is perfect. Some of the many options:
Prior restraint. Two different prior restraint cases are making their way through the courts, and they aim to clarify to what extent, and under what circumstances, the government can control what is published online. Even in the extreme case where prior restraint of this kind is upheld across the board, exerting total government control over content on decentralized networks would be a monumental, if not technically infeasible, task.
Content moderation. Platforms could prevent deepfakes from ever reaching an audience. Platform-driven content moderation is already used in some form or another to regulate content, with varying degrees of success. TikTok attempts this by sifting through vast amounts of uploaded content with an automated process, but even its algorithms overlook high-quality deepfakes.
Regulate end-use. Another option is to tightly regulate sector-specific end-use. The Federal Election Commission proposes to do this by broadening its rule on deceptive campaigning to include AI. Regulating end-use would hold deepfakers accountable but wouldn't necessarily prevent the publication of deepfakes. Content published from outside the country would be challenging to prosecute, and if penalties are not harsh enough, the regulation will have limited effect: some may find it well worth a monetary penalty to use an illicit deepfake to, say, smear political opponents.
Content verification. A certificate that validates authenticity is another possibility. Websites already rely on a certificate process to establish trust, and a similar structure could be created for content producers: a third party could certify user accounts or content from trusted sources and publish those certificates. All of this requires, however, that the certification be effective, that users pay attention to (and trust) it, and that users ignore non-certified content and accounts.
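Mechanically, this kind of certification looks a lot like code signing: a trusted party signs a digest of the original content and publishes its public key, and anyone can later check that what they are viewing still matches what was certified. The Python sketch below, using the cryptography library, shows the basic shape; the key handling, names, and workflow are simplified assumptions rather than a description of any existing content-credential standard.

```python
# Illustrative sketch of content certification via digital signatures.
# Key management, distribution, and revocation are omitted; names are hypothetical.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The certifying party holds a private key; its public key is published.
certifier_key = Ed25519PrivateKey.generate()
certifier_public_key = certifier_key.public_key()

def certify(content: bytes) -> bytes:
    """Sign a digest of the content; the signature is published alongside it."""
    return certifier_key.sign(hashlib.sha256(content).digest())

def is_certified(content: bytes, signature: bytes) -> bool:
    """Anyone with the public key can check that the content was certified and is unaltered."""
    try:
        certifier_public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

original = b"...original video bytes..."
signature = certify(original)
print(is_certified(original, signature))             # True
print(is_certified(b"altered deepfake", signature))  # False: any change breaks the signature
```

Even a perfect signature scheme, though, only addresses the first of the three requirements above: it proves the content is unaltered, not that viewers will look for, trust, or insist on the certification.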
Targeted lawsuits. The Recording Industry Association of America (RIAA) sued tens of thousands of people in the early 2000s for illegally sharing copyrighted content. This aggressive and controversial strategy didn’t do much to save music sales (although there were other contributing factors like the rise of streaming services). Despite its unpopularity, it is still an option and may be attractive to deepfake victims or those who feel platforms or governments are not doing enough.
The solution may not be completely regulatory. A cultural shift to a "zero-trust" form of content consumption may be warranted. Readers, viewers, and scrollers alike may have to assume content is fake until proven otherwise. Taylor Lorenz, in a recent podcast, highlights this user-driven method:
Media literacy is so important. People today, especially young people, cannot distinguish truth from fiction. They cannot understand when somebody is lying to them…I think we need protections in place, and I don't know that those protections necessarily need to come from the government, because I think they usually do bad regulation and it ends up kind of causing harm, but I think that we need to have these discussions and we need to force these platforms to be more responsible.
“Doing bad regulation” can also come in the form of doing no regulation. AI companies are trying to self-regulate. Many of them are self-publishing “AI constitutions” outlining safety measures and conduct for their AI platforms, yet these “constitutions,” along with internal safety measures and guardrails, almost always fail. Such failures are prevalent enough that a niche “AI insurance” industry has popped up.
The whack-a-mole problem that is AI-generated deepfakes will require more than one giant mallet. A multi-pronged approach across users, industry, and regulatory agencies will be needed to make any discernible dent in the problem. Ultimately, though, there may always be an irreducible minimum of deepfakes, and a type of caveat emptor attitude will be required.
In other news
The Department of Homeland Security will release guidance on AI and critical infrastructure. “The scope of this forthcoming guidance will address how to successfully audit front and back end systems, when to incorporate humans in automated processes and how to mitigate widespread, severe system failure.”
Uber was found non-compliant with EU data transparency laws after it failed to provide adequate access to the data behind the “robo-firings” of two drivers.
The Center for Democracy and Technology (CDT) published two reports, one a response to the European Commission’s Proposal on High-Risk Classification and the other an analysis of how AI could lead to anticompetitive collusion. The latter is a good read for those interested in the unintended market consequences of businesses adopting AI.