Weekly Roundup: Deceptive Campaigning
Government and private sector take action against AI in electioneering.
Generative AI in elections—and deepfakes in general—are hot topics nationwide among legislatures and regulatory agencies. This past week, a few states tackled the issue.
The Wisconsin State Legislature passed a bill last week requiring political ads and communication to disclose AI use.
The Hawaii State Legislature is reviewing two bills that would ban false or deceptive “synthetic media” distributed without disclosure.
The Tennessee legislature is considering election-related AI bills, as well as two bills expanding the scope of prohibitions on, and the penalties for, sexually explicit deepfakes.
California, Washington, Michigan, and Minnesota already have laws on AI in campaigning on the books, and about a dozen states considered legislation in 2023.
The Federal Government also took action to curb deepfakes.
The Federal Trade Commission finalized a new rule banning the impersonation of governments and businesses and “put out a call for public input on a proposed rule that covers artificial intelligence-generated scams directed at individuals.” The new rules respond to a surge in impersonation complaints driven by generative AI.
The Federal Communications Commission released a declaratory ruling banning AI-generated voices in robocalls under the 1991 Telephone Consumer Protection Act, the statute used to curb “junk calls.”
The White House is looking for ways to cryptographically verify official communications so the public can tell genuine correspondence from fakes.
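What that verification could look like is ordinary public-key signing. Below is a minimal sketch, assuming Ed25519 signatures via the Python cryptography package; the keys, message, and workflow are illustrative, not any actual White House scheme.

```python
# Minimal sketch: signing and verifying an official statement with Ed25519.
# Requires the "cryptography" package (pip install cryptography).
# All names and messages here are illustrative assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing office holds the private key and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

statement = b"Official statement: polls are open until 8 p.m. local time."
signature = private_key.sign(statement)

# Anyone with the published public key can check authenticity.
try:
    public_key.verify(signature, statement)
    print("Valid: statement is authentic and unaltered.")
except InvalidSignature:
    print("Invalid: statement may be forged or tampered with.")

# A doctored copy fails verification.
try:
    public_key.verify(signature, b"Official statement: polls are closed.")
except InvalidSignature:
    print("Tampered copy rejected.")
```

The cryptography here is the easy part; the hard part is distributing trusted public keys so the public knows which key speaks for which office.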
The private sector also weighed in.
Big tech companies signed a pledge to combat the “Deceptive Use of AI in 2024 Elections.”
These same companies have also been ramping up efforts to watermark AI-generated content through various mechanisms.
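The mechanisms range from signed provenance manifests (the C2PA standard that several of these companies back) to statistical watermarks baked into a model’s output. As a toy illustration of the labeling idea only, assuming nothing about any company’s actual pipeline, here is how a generator could stamp provenance metadata into a PNG with Pillow:

```python
# Toy illustration of content labeling: writing provenance metadata into a PNG.
# Requires Pillow (pip install Pillow). The generator name is a hypothetical
# placeholder; real schemes such as C2PA use cryptographically signed manifests.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), color="gray")  # stand-in for model output

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-image-model-v1")  # hypothetical model name
image.save("labeled_output.png", pnginfo=meta)

# Anyone can read the label back from the file.
labeled = Image.open("labeled_output.png")
print(labeled.text)  # {'ai_generated': 'true', 'generator': 'example-image-model-v1'}
```

Plain metadata like this disappears with a screenshot, which is why the industry is also pursuing watermarks embedded in the pixels themselves.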
The upshot of all these headlines is that efforts to curb AI-generated content come in two forms: authentication and prohibition.
Authentication can mean labeling or disclosing AI-generated content, or verifying genuine content. In most state laws, disclosure is the requirement.
Prohibition can happen at the point of creation, through guardrails in AI platforms, or at the point of distribution, by blocking dissemination on the internet.
Both approaches are important when dealing with foreign actors interfering in U.S. elections. China runs an active information campaign named “Spamouflage” that “hopes to breed disenchantment among voters by maligning the United States” and to cast democracy itself as the problem. There is no reason the United States should allow outside influence in its electoral process.
But what do these regulatory measures mean for domestic actors, like political campaigns? For a democracy to function, a voter must be able to weigh information and make a decision. Authentication serves this purpose. Knowing where information came from is an integral part of decision-making; it’s why newspapers quote named sources and campaign ads have the candidate say, “I approve this message.”
Prohibitive measures, however, require a different calculus. Prohibiting generative AI because machine-generated content is somehow distinct from other media doesn’t make sense, given our history of political advertisements. Likewise, prohibiting content because it is “misleading” would prohibit all sorts of political speech, some of which people might call “persuasive.”
Camp-AI-gning
1964
In the 1964 presidential campaign, incumbent Lyndon Johnson ran an ad featuring a little girl with a daisy, followed by a nuclear explosion, insinuating that a vote for Barry Goldwater would usher in Armageddon. The ad doesn’t differ substantially from what generative AI does: take preexisting data (footage), synthesize (splice) it, and create something new.
I tried to create an image similar to this ad through ChatGPT 4.0. The original prompt mentioned politics but did not name any public figures. What I got was pretty good.
When I tried to regenerate the image so that it blamed Donald Trump for the nuclear holocaust, I got the following denial message:
“The request to explicitly depict or suggest that a specific individual, such as Donald Trump, would cause a catastrophic event like a nuclear explosion, involves creating content that could be considered politically sensitive, defamatory, or misleading. My guidelines prioritize respect, neutrality, and the avoidance of spreading potentially harmful or unsubstantiated claims about individuals or entities. This principle applies especially in contexts that involve public figures and the potential impact of their actions based on speculative or hypothetical scenarios.”
The last line (emphasis mine) is a bit comical since anyone running for office who isn’t an incumbent is running on “the potential impact of their actions based on speculative or hypothetical scenarios.”
I tried again with a positive prompt, asking it to include text similar to the message in the Johnson ad: “Vote for Biden. Your life depends on it.” I got the same denial.
1984
Another example comes from the 1984 presidential campaign, in which incumbent Ronald Reagan ran the famous “It’s Morning Again in America” ad:
Reagan’s challenger, Walter Mondale, ran a rebuttal painting a much different picture of America in 1984. Both ads cite economic statistics but paint polar-opposite pictures of the country they were campaigning in. Were these ads deceptive or fraudulently misrepresentative? Had they been deepfakes, they might not have passed muster under Tex. Election Code Ann. §255.004, which prohibits deepfakes made with “intent to injure a candidate or influence the result of an election.” Intending to injure a candidate and influence the result of an election is a major part of running a political campaign.
2012
A final example is an ad Will Ferrell made on behalf of Barack Obama in 2012.
The same Texas law defines a deep fake video as “a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.”
In the ad, Will Ferrell promises an array of absurd incentives if the viewer goes out and votes and, at the end, endorses Obama. The ad, clearly satirical, is nonetheless a false promise. Does acting constitute “reality” when the entire scene was concocted to persuade people to vote for Obama? Would this ad be prohibited if the medium changed from film to AI?
Persuasion or deception?
None of this is to say there are no dangers in having AI in elections. AI makes it easier to create damaging, believable lies (like the robocall that tried to keep people from the polls in New Hampshire last month) or to deny events that actually happened. Nor is the comparison of past political ads to generative AI meant to wave off the distinction between foreign and domestic political speech, or to lay the groundwork for an anything-goes, “free-speech absolutist” framework.
It is, however, hard to construct a narrative that this technology will somehow create campaign material much different from any in our past. Outside of prohibiting videos that show opposition candidates physically doing or saying something they never did (which is already prohibited), prohibiting AI-generated content comes awfully close to prohibiting a type of political speech.
Part of the solution, as some state legislatures are rightly pursuing, is authentication. Political speech meant to influence a nation’s governance should be attributable and genuine. Providing voters with more information, such as who generated the content and what it was generated with, gives the agency of decision to the citizen. Whatever the factual accuracy of Mondale’s and Reagan’s campaign ads, voters had to decide for themselves what the actual state of the country was.
In other news
Hacking groups with ties to China, Russia, North Korea, and Iran have used OpenAI to assist their efforts:
“But instead of using A.I. to generate exotic attacks, as some in the tech industry feared, the hackers have used it in mundane ways, like drafting emails, translating documents and debugging computer code, the companies said.”
AI in healthcare is largely unregulated.
Unlike medical devices or drugs, AI software changes. Rather than issuing a one-time approval, FDA wants to monitor artificial intelligence products over time, something it’s never done proactively…
Advancements in AI have created big gaps in what the FDA regulates. It does nothing to review tools like chatbots, for example, and it has no authority over systems that summarize doctors’ notes and perform other critical administrative tasks.
Floods of AI content have made Google searches less reliable for consumers.
Air Canada must honor a refund promised to a customer by its chatbot. Despite the offer being a “hallucination,” the tribunal rejected the argument that the chatbot had its own legal personality.
AI-generated biographies are running rampant on Amazon.
Massachusetts Governor Healey signed an executive order establishing a task force on AI’s impact on the state.
Some past coverage and analysis of deepfakes and elections:
Weekly Roundup: Deepfakes Meet Swift Resistance