Weekly Roundup: Deepfakes Meet Swift Resistance
Politicians and T-Swift confront deepfakes and the war on truth
Speak now
Over the past two weeks, the news cycle has been consumed by deepfakes, a trend that will only intensify as the United States edges closer to the fall election. Last weekend, robocalls impersonating President Biden told voters in New Hampshire that their vote didn't matter, while a super PAC backing his Democratic challenger, Dean Phillips, had its bot removed by OpenAI because it violated "API usage policies which disallow political campaigning, or impersonating an individual without consent."
The Biden robocall was generated using a tool from ElevenLabs, a company that "recently achieved 'unicorn' status by raising $80 million at a $1.1 billion valuation in a new funding round co-led by venture firm Andreessen Horowitz. Anyone can sign up for the company's paid service and clone a voice from an audio sample."
ElevenLabs isn't the only company making audio deepfakes a tricky but lucrative problem.
Researchers have also warned that realistic but faked voice clips imitating politicians and leaders are likely to spread, following instances in 2023 of allegedly synthetic audio being created to influence politics and elections in the UK, India, Nigeria, Sudan, Ethiopia and Slovakia.
Audio deepfakes are becoming an increasingly popular form of disinformation, according to experts, because of the advent of cheap and effective AI tools from start-ups such as ElevenLabs, Resemble AI, Respeecher and Replica Studios. Meanwhile, Microsoft's research arm announced last year that it had developed a new AI model, VALL-E, that can clone a voice from just three seconds of audio.
Treacherous
It's not just campaigning that is under threat but the entire paradigm of truth. In an aptly titled Financial Times story, "Fakes, forgeries and the meaning of meaning in our post-truth era," the author dives into a deep history of fakes, beginning in an era before the ubiquity of the internet and artificial intelligence, and sums up the current problem:
Just think about the notorious tape from Access Hollywood, in which Donald Trump boasted of sexually assaulting women. It was released in October 2016 and caused a political explosion. Deepfake audio wasn’t part of the conversation then, but if it had been, Trump could easily just have said: “That’s not my voice on the tape.” The mere fact that deepfakes might exist creates a completely new kind of deniability.
In sectors like electioneering and journalism, this "new kind of deniability" means that organizations and individuals must go to greater lengths than ever before to prove facts. If every scandal, unsavory piece of reporting, or inconvenient truth could plausibly be a deepfake, then the threshold for truth will be extraordinarily high. This could manifest as a presumption that everything reported is false until proven, beyond a reasonable doubt, to be unimpeachably true. Or it could manifest as an extreme version of "he said, she said," where the truth-teller is simply the person with the more amenable personality.
reputation
Unfortunately, truth matters little in some sectors, especially once the content reaches its intended audience. Fake pornographic images of Taylor Swift spread across the internet last week. While Swifties made an admirable effort to bury the content that could not be removed, not everyone has Taylor's profile or fan base:
In many ways, this is a nightmare scenario for anyone whose bodies are routinely sexualized and exploited—and especially for teenagers who are most likely to be harmed by AI nudes. Most recently, teenage girls in New Jersey reported that bullies had begun spreading AI-generated nudes of them in their school. And there have been various other incidents where abusers have used “nudify” apps to generate explicit pictures of classmates and online influencers.
The deepfakes have galvanized calls to criminalize the sharing of non-consensual deepfake pornography, but it is unclear how much of a deterrent criminal penalties would be. Like all disinformation, once the content reaches its audience, the damage is done, and the content is nearly impossible to eradicate.
Death by a thousand cuts
People are concerned about losing jobs to AI, and despite economists' pontificating about what effect AI will have on the labor market, stories like this week's news that Sam's Club will replace human receipt checkers with AI stoke that anxiety. The anxiety, however, is not new. MIT Technology Review published a long-form piece on the history of techno-labor anxiety:
In 1930, the prominent British economist John Maynard Keynes had warned that we were “being afflicted with a new disease” called technological unemployment. Labor-saving advances, he wrote, were “outrunning the pace at which we can find new uses for labour.”
Not every country views this as a crisis, though. Japan, facing a demographic problem, is filling jobs with robots and AI across sectors as diverse as trucking, construction, and service industries.
It's not all about replacement, either. AI in the HR process is causing an equal amount of anxiety. New York City's landmark law regulating automated hiring decisions isn't as effective as hoped: burdensome reporting requirements, complaint-driven enforcement, and companies finding loopholes in definitions all contribute to the failure. That said, a first-of-its-kind law is bound to encounter these kinds of pitfalls, and there is no reason the law cannot be updated in the future.
Stay stay stay
The Federal Trade Commission launched an inquiry into generative AI investments and partnerships. It will “scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape. The compulsory orders were sent to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.”
“The Biden administration is preparing to use the Defense Production Act to compel tech companies to inform the government when they train an AI model using a significant amount of computing power. The rule could take effect as soon as next week.” -Wired
The New York Times ran a profile on the Copyright Office, “a small and sleepy office within the Library of Congress” with 450 employees now at center stage in AI regulation battles.
The National Science Foundation launched the National Artificial Intelligence Research Resource pilot. The program will make “federal resources — including advanced computing, datasets, training models, software assistants and user support — open and publicly accessible.”
The National Telecommunications and Information Administration is looking at how to create an auditing process to hold artificial intelligence systems accountable. NTIA Administrator Alan Davidson said, "I think one of the things that we've seen is, like financial audits for the financial accounting system, there is going to be a role to play for audits in the AI ecosystem." Eric Schmidt has advocated for a similar idea using private, third-party auditors.
CSET published a primer on RISC-V, an open-source chip architecture that has the potential to undercut U.S. chip export controls.
The United States Agency for International Development is requesting information as it prepares an "AI in Global Development" playbook incorporating NIST's AI Risk Management Framework. Comments are due March 1, 2024.