Weekly Roundup: The AI EO’s 90th Day
The EO on AI hits its 90-day mark, and Commerce moves toward new restrictions.
It has been 90 days since President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence last October. Since then, executive agencies have issued a whirlwind of regulations, calls for comment, and proposed rules to comply with it. The White House reports that it has met all of the order’s 90-day benchmarks. Some of the significant milestones listed in the readout:
Managing Risks to Safety and Security
Used Defense Production Act authorities to compel developers of the most powerful AI systems to report vital information, especially AI safety test results, to the Department of Commerce.
Completed risk assessments covering AI’s use in every critical infrastructure sector.
Innovating AI for Good
Launched a pilot of the National AI Research Resource—catalyzing broad-based innovation, competition, and more equitable access to AI research.
Launched an AI Talent Surge to accelerate hiring AI professionals across the federal government, including through a large-scale hiring action for data scientists.
Announced the funding of new Regional Innovation Engines (NSF Engines), including with a focus on advancing AI.
Established an AI Task Force at the Department of Health and Human Services to develop policies to provide regulatory clarity and catalyze AI innovation in health care.
One of those milestones is a recent proposal from the Commerce Department to regulate IaaS, or cloud computing.
Infrastructure as a service (IaaS) is a type of cloud computing “that offers essential compute, storage, and networking resources on demand, on a pay-as-you-go basis.” The concern around IaaS:
Foreign malicious cyber actors have utilized U.S. IaaS products to commit intellectual property and sensitive data theft, to engage in covert espionage activities, and to threaten national security by targeting U.S. critical infrastructure. After carrying out such illicit activity, these actors can quickly move to replacement infrastructure.
ChinaTalk wrote a fantastic, in-depth piece on the new regulation that is worth the long read.
Commerce determined that a blanket ban on high-end AI chips going to China was necessary because of the difficulty of figuring out who in the PRC was using them; cloud services, by contrast, offer a much more fine-grained approach, distinguishing between different users rather than cutting off access to hardware wholesale. This regulation makes it possible for cloud providers to implement “Know Your Customer” (KYC) schemes to help prevent misuse of AI computing. KYC has been suggested by policy researchers and industry leaders like Microsoft, and was further fleshed out in a recent paper by the Center for the Governance of AI.
All of these ideas roughly map onto what Commerce now plans to do. The proposed regulations would require US cloud providers to implement “Customer Identification Programs” (CIPs): risk-based processes for identity verification and recordkeeping. Providers must develop these programs and report on their progress to Commerce, and they must ensure that foreign resellers of their services implement and maintain CIPs of their own. The new rule would also require US cloud providers to report how they plan to detect misuse, such as foreign customers training large AI models with potential for malicious cyber-enabled activity.
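The rule leaves the implementation of CIPs to providers, but as a rough illustration, here is a minimal sketch in Python of what a risk-based identity check with recordkeeping might look like. Everything here is hypothetical: the function names, risk tiers, and placeholder country codes are illustrative assumptions, not anything prescribed by the proposed rule.

```python
# Hypothetical sketch of a risk-based Customer Identification Program (CIP)
# check at IaaS signup. All names, tiers, and country codes are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Customer:
    name: str
    country: str        # self-reported jurisdiction
    id_verified: bool   # passed document/identity verification
    is_reseller: bool   # foreign resellers would need CIPs of their own

@dataclass
class CipRecord:
    """Recordkeeping entry: who was checked, the outcome, and when."""
    customer: Customer
    risk_tier: str
    checked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Placeholder codes standing in for jurisdictions a provider deems high-risk.
HIGH_RISK_JURISDICTIONS = {"XX", "YY"}

def assess_customer(c: Customer) -> CipRecord:
    """Assign a coarse risk tier and produce a retained record."""
    if not c.id_verified:
        tier = "reject"            # identity cannot be established
    elif c.country in HIGH_RISK_JURISDICTIONS:
        tier = "enhanced-review"   # extra diligence before provisioning
    else:
        tier = "standard"
    return CipRecord(customer=c, risk_tier=tier)

record = assess_customer(Customer("ExampleCo", "XX", id_verified=True, is_reseller=False))
print(record.risk_tier)  # -> enhanced-review
```

A production system would layer in document-verification services, sanctions screening, and audit logging, but the basic point stands: identity verification and recordkeeping are programmable checks, which is what makes cloud-level controls more fine-grained than a hardware ban.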
While the regulation sets out reporting criteria and will require more due diligence from cloud computing companies, a slew of questions remains unanswered, particularly around the technical thresholds that would trigger reporting. The upshot, however, is that BIS now has one more tool to restrict bad actors from adversarial nations.
Low bandwidth and forgotten customers
In working to ensure markets remain competitive and barriers to entry are not insurmountable, it is easy to take for granted that customers already have equal access to products. Likewise, regulations often assume parity in products across borders.
Without access to high-speed internet, running any form of web-based AI service is impractical, prematurely shutting out substantial demographics of users.
“Last week, the city council in Los Angeles, Calif., passed a motion banning ‘digital discrimination,’ which is when internet service providers (ISPs) inequitably deploy high-speed internet connections or disproportionately withhold the best deals for their services from racially or socio-economically marginalized neighborhoods.”
The Markup published the above story and ran an investigation in 2022, “showing how ISPs in 38 U.S. cities, including AT&T in Los Angeles, were offering high-speed broadband connections for the same price as sluggish ones to different households in the same city.”
Online child safety was the subject of a Senate Judiciary hearing this past week, but content moderation is usually conducted in English, even when the users being moderated are not native English speakers.
“[The Center for Democracy and Technology] is launching a new project that focuses on content moderation policies and measures in the Global South. This research will critically examine how content moderation systems operate in non-English contexts, particularly in indigenous and other languages of the Majority World (i.e., the Global South).”
Over the years, evidence has shown that tech companies implement policies and practices in moderating non-English language content that can have the effect of impeding individuals from speaking freely or accessing information in their native language. For example, according to one whistleblower, Facebook allocates 87% of its spending on misinformation countermeasures to English content, despite only 9% of its users being English speakers.
In other news
Apple ramped up its autonomous vehicle program last year, a program it has worked to keep secret. The expansion is somewhat surprising given the safety troubles other companies have had.
Testing AVs to ensure they are safe enough to drive on public roads is a monumental task. One company is tackling that safety problem by focusing on the perception part of the equation, rather than relying on machine learning alone.
The EU’s AI Act passed a major test vote last week and is set to become law in a few months, pending a final vote in the EU Parliament.
The FCC is moving to explicitly outlaw robocalls that use AI-generated voices.
“The Office of Management and Budget wants to know how privacy impact assessments — analyses of the government’s use of personal information about individuals required of agencies — might be improved.” This request for information stems from the AI EO; comments are due April 1, 2024.