Weekly Roundup: Executive Order Recap
Selected analyses of the EO, where data labeling is outsourced, and generative AI vigilantes.
The Biden Administration’s Executive Order on AI is a few weeks old, and agencies are in full swing meeting some of its regulatory requirements.
This month, the Pentagon and the Cybersecurity and Infrastructure Security Agency (CISA) released AI strategies that align with the EO.
The National Institute of Standards and Technology (NIST) has multiple related projects underway and open calls for comment.
AI.gov, a first stop for citizens looking to understand and involve themselves in the U.S. government’s AI initiatives, received a makeover.
The Center for Security and Emerging Technology (CSET) maintains a tracker of all the deliverables required by the EO, including deadlines for public comment.
Now that the EO is a few weeks old, good summaries and analyses of the order have appeared. CSET offers an excellent, condensed summary of the high-level takeaways. The EO:
Leverages the power of the Defense Production Act to impose reporting obligations on the developers of the most powerful AI systems. Developers of these systems must notify the federal government when they train the model and disclose all red-teaming results.
Orders the creation of new standards to ensure the safe development of AI, including standards to protect against the AI-aided development of dangerous biological materials.
Directs agencies to streamline immigration processes to further attract highly skilled immigrants with expertise in AI and critical and emerging technologies.
Directs the NSF to launch a pilot program for the National AI Research Resource. Plans for a NAIRR have been in the works since 2020, and earlier this year, a task force recommended a $2.6 billion investment to build out a “shared research infrastructure” of publicly accessible computing power, datasets, and educational tools.
Calls for agencies to identify how AI could assist their missions and to take steps to deploy these technologies by developing contracting tools, training federal employees, and hiring more technical talent via a “National AI Talent Surge.” The Office of Management and Budget is seeking further input on these provisions via a Request for Comment.
Includes a host of other provisions related to protecting against the extant and near-term harms of AI systems, such as discrimination, fraud, and job displacement.
This summary was published in CSET’s monthly AI newsletter. The newsletter also recommends a more detailed synopsis by Stanford University’s Institute for Human-Centered AI.
Deflated resources for AI diffusion
ChinaTalk published two good discussions on the EO. In one, the author was particularly excited about the EO’s emphasis on “AI diffusion.”
The best way to create AI abundance and compete with China isn’t policy focused on R&D (though investment is certainly encouraged) or protectionist controls. Rather, we must pursue what we call a diffusion-centric AI strategy: focusing on rolling out this tech, building talent, and easing barriers to adoption.
Encouraging “a whole-of-government push to educate, bring in talent, and assess IT with the aim of creating a modern, AI-driven government that (hopefully) is more efficient, approachable, and fair” will undoubtedly drive AI diffusion within the government, and across the country if matched by changes to U.S. immigration policy. However, the whole-of-government approach can’t reach its full potential without additional funding from Congress, and some of the EO’s heavy lifters are grossly underfunded.
NIST will implement many provisions of the order, yet it currently has only twenty employees staffed on its Responsible AI team.
Foundational safety
The EO also seeks to put guardrails on “dual-use foundation models,” which it defines as:
An AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.
Examples of these security risks include making chemical, biological, radiological, or nuclear weapons easier to develop and enabling powerful offensive cyber operations.
In another post dedicated to analyzing the EO, ChinaTalk discusses this issue in the context of biosecurity:
Bad actors can possibly create conventional bioweapons more easily, but that usually requires foundational models that need a whole lot of compute. At the moment, there’s nothing these models can do that you couldn’t already do with the Internet. They just make it faster. Even then, they introduce mistakes.
There are other barriers to entry for would-be bioterrorists: labs, materials, transportation, and deployment mechanisms are all things a group using AI to cause harm in this context would have to deal with. Still, knowledge dispersal is a major issue. While information may be available on the internet, people who pose a risk to national security may not always know what they are looking for or how to solve problems along the way, something AI could help them with.
It is unclear whether it is technically feasible (and sustainable) to prevent large foundation models from providing this type of information in the first place. It is also uncertain how much expertise a terrorist group, for example, would need to alter an open-source foundation model for its own purposes. The administration seems aware of these risks, however, and the EO directs agencies to dedicate resources to addressing them.
In other news
The United Kingdom will hold off on AI regulation
The UK has said it will refrain from regulating the British artificial intelligence sector, even as the EU, US and China push forward with new measures.
The UK’s first minister for AI and intellectual property, Viscount Jonathan Camrose, said at a Financial Times conference on Thursday that there would be no UK law on AI “in the short term” because the government was concerned that heavy-handed regulation could curb industry growth.
Underage workers are training AI
“AI is presented as a magical box that can do everything,” says Saiph Savage, Director of Northeastern University’s Civic AI Lab. “People just simply don’t know that there are human workers behind the scenes.”
Wired published an in-depth investigative piece on how massive amounts of training data are labeled and organized. Oftentimes, the work is done by children in impoverished countries who skirt the age requirements of their employers.
These workers are predominantly based in East Africa, Venezuela, Pakistan, India, and the Philippines—though there are even workers in refugee camps, who label, evaluate, and generate data. Workers are paid per task, with remuneration ranging from a cent to a few dollars—although the upper end is considered something of a rare gem, workers say.
The job creation, however, is a mixed bag. While some people interviewed in the story felt frustration when comparing their wages to those of U.S. tech workers, others jumped at the opportunity to earn well above local norms, often paid in U.S. dollars.
A new tool lets artists add invisible changes to the pixels in their art before they upload it online
“…If [the art is] scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.”
The tool, Nightshade, “exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data—in this case, images that have been hoovered from the internet.”
Once AI developers scrape the internet to get more data to tweak an existing AI model or build a new one, these poisoned samples make their way into the model’s data set and cause it to malfunction.
Poisoned data samples can manipulate models into learning, for example, that images of hats are cakes, and images of handbags are toasters. The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample.
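To make the mechanics a bit more concrete, here is a minimal, hypothetical sketch of how optimization-based data poisoning works in general: an attacker perturbs an image within a small pixel budget so that a feature extractor “sees” it as a different concept, and any model later trained on the scraped image inherits the corrupted association. This is not Nightshade’s actual algorithm; the toy linear feature extractor, image size, and all parameters below are assumptions for illustration.

```python
# A minimal, hypothetical sketch of optimization-based data poisoning, NOT Nightshade's
# actual algorithm. A toy linear "feature extractor" (a fixed random projection) stands
# in for a real image encoder so the example runs with numpy alone; the image size,
# perturbation budget, and hyperparameters are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

IMG_SHAPE = (32, 32, 3)   # toy image dimensions
FEAT_DIM = 64             # toy feature dimension
W = rng.normal(size=(int(np.prod(IMG_SHAPE)), FEAT_DIM))  # stand-in for a learned encoder

def features(img: np.ndarray) -> np.ndarray:
    """Toy feature extractor: flatten the image and project it linearly."""
    return img.reshape(-1) @ W

def poison(img: np.ndarray, target_img: np.ndarray,
           eps: float = 8 / 255, steps: int = 100, lr: float = 1e-4) -> np.ndarray:
    """Nudge `img` so its features move toward `target_img`'s, while keeping every
    pixel within +/- eps of the original so the change stays visually subtle."""
    target_feat = features(target_img)
    delta = np.zeros_like(img)
    for _ in range(steps):
        diff = features(img + delta) - target_feat
        grad = (W @ diff).reshape(IMG_SHAPE)   # gradient of 0.5 * ||f(img+delta) - f_target||^2
        delta -= lr * grad                     # gradient step toward the target's features
        delta = np.clip(delta, -eps, eps)      # enforce the small-perturbation budget
    return np.clip(img + delta, 0.0, 1.0)

# Example: make a "hat" image look, to the feature extractor, more like a "cake" image,
# so a model trained on the poisoned pair learns a corrupted association.
hat = rng.uniform(size=IMG_SHAPE)
cake = rng.uniform(size=IMG_SHAPE)
poisoned_hat = poison(hat, cake)

print("max pixel change:", np.abs(poisoned_hat - hat).max())
print("distance to 'cake' features before:", np.linalg.norm(features(hat) - features(cake)))
print("distance to 'cake' features after: ", np.linalg.norm(features(poisoned_hat) - features(cake)))
```

A real attack would optimize the perturbation against a learned image encoder rather than a random projection, but the core idea is the same: a small pixel change produces a large shift in what the model perceives, which is why, as the article notes, finding and deleting each corrupted sample is so painstaking.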
This vigilante-style action may create some blowback for the artists using these tools. By analogy: if a thief’s vehicle is damaged by a victim during a robbery, no one will hold the victim responsible for the damages. Should the victim go to the thief’s house and burn it down after the fact, however, it may be a different story.