Weekly Roundup: The EU Acts, and AI Phone Banks
Highlights from the agreed-upon AI Act and addressing the AI-generated campaign robocaller
EU’s big act
The big regulatory news this past week was the European Union's agreement on a landmark law to regulate artificial intelligence. According to the New York Times:
The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.
The act still must be formally approved by the European Parliament and the Council of the EU before it becomes law.
The regulation, which uses a risk-based framework as well as model size to determine the scope of its requirements, has multiple aspects. A few of the significant takeaways from MIT Technology Review:
Imposes “legally binding rules requiring tech companies to notify people when they are interacting with a chatbot or with biometric categorization or emotion recognition systems.”
Requires companies “to label deepfakes and AI-generated content, and design systems in such a way that AI-generated media can be detected.”
Requires “foundation models and AI systems built on top of them to draw up better documentation, comply with EU copyright law, and share more information about what data the model was trained on. For the most powerful models, there are extra requirements.”
The AI Act bans specific applications of the technology:
Biometric categorization systems that use sensitive characteristics
Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases like Clearview AI
Emotion recognition at work or in schools
AI systems that manipulate human behavior
AI that is used to exploit people’s vulnerabilities
Predictive policing is also banned, unless it is used with “clear human assessment and objective facts, which basically do not simply leave the decision of going after a certain individual in a criminal investigation only because an algorithm says so”
Systems developed “exclusively for military and defense uses” are free from regulation under the act.
Always a critic
The act, of course, is not perfect. French President Emmanuel Macron worries that it will stifle innovation and leave European companies at a competitive disadvantage. Enforcement is another concern. While some cite “the Brussels effect” as a reason the EU will be a forcing function for the rest of the world to adopt its AI regulations, there are a few reasons this law may play out differently from past omnibus regulatory efforts. Summarizing The Economist’s concerns:
In the case of the GDPR, national data-protection agencies are mainly in charge, which has led to differing interpretations of the rules and less than optimal enforcement.
Brussels’ new “AI Office” and national bodies will lack the expertise to prosecute violations.
“The incentives in AI are different: AI platforms may find it easier to simply use a different algorithm inside the EU to comply with its regulations. (By contrast, global social-media networks find it difficult to maintain different privacy policies in different countries.)”
“By the time the AI Act is fully in force and has shown its worth, many other countries, including Brazil and Canada, will have written their own AI acts.”
The problems notwithstanding, even the harshest critics recognize that being first out of the gate to regulate a new and rapidly evolving technology is no easy feat. For a more detailed synopsis, the Center for Democracy and Technology published a bulletin this past week that includes links to its past commentary on specific parts of the bill.
Equalizer or equivocator?
“Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’ run for Congress,”
-Ashley, the AI robocaller in a U.S. Congressional election
The Federal Election Commission leaned into proposing new rules to regulate AI in campaigning even before the technology's impact was clear. Prescient or self-evident, AI has already been used in election campaigns:
New York Mayor Eric Adams used AI voice cloning to send robocall messages in a variety of languages he does not speak.
The U.S. Republican National Committee aired an entirely AI-generated ad directed against President Biden, showing a bleak future if he is re-elected.
Governor Ron DeSantis's campaign used AI-generated images of former President Donald Trump.
Bangladesh’s January election has been plagued with AI-generated misinformation.
The dangers of AI-generated misinformation go beyond fabricating videos of things that never happened. In India, one politician dismissed a video as fake, only for it to later be verified as authentic. Claiming that genuine stories, pictures, and videos are fabricated is nothing new, but the proliferation of AI tools makes brushing aside the truth that much easier and more believable.
Although the actual impact has yet to be measured, the Brookings Institution found that the limited academic literature still raises some concerns.
[It] suggests that some of the concerns [of AI in elections] have been overstated, while others have been understated. There is little evidence that either political ads or single pieces of online misinformation have strong capacity to change people’s minds, nor is there much evidence to suggest GenAI will change their impact.
Where to focus solutions?
Brookings’ policy recommendations for addressing AI in elections focused heavily on the outcomes of generative AI content. They recommended civil rights-oriented legislation, such as outlawing voter suppression and increasing enforcement of existing statutes, along with a combined literacy and factual information campaign.
In an opinion piece for the MIT Technology Review, Eric Schmidt likewise looked at the outputs of AI, proposing a more technical approach to combating misinformation. Verifying human users, verifying IP addresses, and identifying deepfakes are three of his six proposals for combating election misinformation, most of which focus on preventing false information from reaching its target.
Restricting use is the regulatory approach the FEC is working toward, and many states are trying it as well. About a dozen states have laws directly addressing AI in elections and campaigns, or “deceptive campaigning” statutes already on the books that might be used to rein in deepfakes.
Don’t forget the tech
Whatever style of solution policymakers are drawn toward (hopefully a combination of all three approaches), the solution still has to be technically feasible. This month, the Center for Security and Emerging Technology published a primer on controlling LLM outputs. The 20-page document gives a brief overview of how LLMs are trained, looks at open and closed models, and explains “four popular techniques that developers currently use to control LLM outputs,” categorized along stages of the LLM development life cycle:
Editing pre-training data
Supervised fine-tuning
Reinforcement learning with human feedback and Constitutional AI
Prompt and output controls
Each solution has pros and cons, and none is a silver bullet. Even taken in aggregate, and setting aside the engineering hours and compute power required to apply every method, there is still the problem of bad actors. Unfortunately, once misinformation hits an audience, the damage is done.
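To make the last of those categories concrete, here is a minimal sketch of what prompt and output controls can look like in practice: a fixed instruction prepended to every request, plus a simple pattern check on the model's reply before it reaches a voter. The generate() stub, the instruction text, and the blocklist are illustrative assumptions for this newsletter, not anything drawn from the CSET primer itself.

```python
import re

# Prompt control: a fixed instruction prepended to every request.
SYSTEM_INSTRUCTION = (
    "You are a campaign outreach assistant. Do not make claims about "
    "voting dates, polling locations, or voter eligibility."
)

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a hosted or locally run model; the canned
    # reply just lets the output check below run end to end.
    return "Your polling place is the community center on Main Street."

# Output control: refuse to pass along text matching risky patterns.
BLOCKED_PATTERNS = [
    r"\bpolling (?:place|location)s?\b",
    r"\belection day is\b",
]

def controlled_reply(user_message: str) -> str:
    raw = generate(f"{SYSTEM_INSTRUCTION}\n\nVoter: {user_message}")
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, raw, flags=re.IGNORECASE):
            return "I can't answer that; please contact your county election office."
    return raw

if __name__ == "__main__":
    print(controlled_reply("Where do I vote next week?"))
    # -> "I can't answer that; please contact your county election office."
```

In a real deployment, checks like these sit on top of fine-tuning and RLHF rather than replacing them, which is part of why the primer stresses that no single technique is sufficient on its own.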
What exactly is the problem?
Separating generative AI from information warfare is no longer possible. Crafting policy solutions requires an understanding (and regulation) of both. Policymakers should take care, however, not to conflate the distinct problems. On the one hand, there is the problem of candidates, their proxies, or foreign actors spreading lies, something AI now makes easier. While Brookings did not find overwhelming evidence of AI election influence, they did find:
Smaller, down-ballot races may be more susceptible to the impact of political ads since there’s often much less advertising in these races, voters are less familiar with the candidates, and there’s less oversight and media attention.
On the other hand, optimizing campaign infrastructure and operations, like using an AI robocaller, could make a new group of underfunded, less well-known candidates viable. It could also lower the barrier to entering an election so much that a new swath of citizens can run for office. The technology could give dark horse candidates access to hundreds of thousands of voters they could not otherwise reach without a campaign war chest or political party apparatus. The barrier to entry for misinformation has been substantially lowered, but so has the barrier for unconnected citizens to engage in democracy.
Over the weekend, Ashley called thousands of Pennsylvania voters on behalf of Daniels. Like a seasoned campaign volunteer, Ashley analyzes voters' profiles to tailor conversations around their key issues. Unlike a human, Ashley always shows up for the job, has perfect recall of all of Daniels' positions, and does not feel dejected when she's hung up on.