Weekly Roundup: Congress to Propose AI Framework Tuesday
Senators to propose a new AI regulatory framework, AI-CSAM in the crosshairs, and other highlights from this week's AI regulation news.
Senators to unveil AI regulatory framework
“Senator Richard Blumenthal, Democrat of Connecticut, and Senator Josh Hawley, Republican of Missouri, plan to announce a sweeping framework to regulate artificial intelligence” this Tuesday during a hearing. This hearing is separate from the meeting between tech execs and Senator Chuck Schumer (D-NY) scheduled for Wednesday.
Key points that will be included in the upcoming framework:
Requirements for the licensing and auditing of AI.
The creation of an independent federal office to oversee the technology.
Company liability for privacy and civil rights violations.
Requirements for data transparency and safety standards.
Wired, reporting the same story, questioned some elements of the proposed framework. Two of the major questions raised:
1. How will a new federal office manage “the broad range of technical and legal knowledge” across AI in multiple industries?
Creating a new government agency would be stunning in today’s legislative atmosphere, especially one encompassing everything AI touches. The success of the new office will hinge on a few items:
Mission Clarity. How this new office will interact with other agencies will be crucial, as AI regulation is already taking place across multiple departments and offices. Will the new office take over all things AI? Only coordinate and advise among existing agencies? Will its mission be narrow enough (maybe some sort of unique ML audit authority) that it won’t clash with other agencies?
Organizational Clarity. Starting new agencies is challenging. The Cybersecurity and Infrastructure Security Agency (CISA) had organizational problems two years after it was created; one of the initial critiques concerned its stakeholder communications. An AI agency that wants to regulate multiple sectors will face similar problems.
Funding. Congress will face another continuing resolution showdown this month. Can an entirely new office really be funded and stood up? If resources are cannibalized from existing agencies, how detrimental will that be to existing missions? A grounded cost-benefit analysis will be needed to ensure there is actually enough unique work to warrant one.
None of this is to say that a new office is a bad idea—government agencies manage disparate sectors all the time—but depending on the scope and scale of the office’s mission, it might be wise to manage expectations.
2. Will a licensing regime restrict innovation and further concentrate the industry?
Anti-antitrust? If only a few companies are granted licenses to develop AI, will that render antitrust litigation moot?
Which AI should be licensable? A licensing regime implies that at least some AI is too dangerous for the general public to use or build on. Assuming that not all AI needs a license, the U.S. will probably need to adopt a risk-based categorization hierarchy, much like the EU’s.
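To make the tiering idea concrete, here is a minimal sketch of how a risk-based licensing gate might be modeled. The tier names are loosely borrowed from the EU AI Act’s categories (unacceptable, high, limited, minimal risk); the examples and the licensing rule itself are hypothetical and do not reflect the Senators’ actual proposal.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical risk tiers, loosely modeled on the EU AI Act's categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., social scoring)
    HIGH = "high"                  # license and audit required (e.g., hiring or credit models)
    LIMITED = "limited"            # transparency obligations only (e.g., chatbots)
    MINIMAL = "minimal"            # no license needed (e.g., spam filters)

def license_required(tier: RiskTier) -> bool:
    """Return True when a system in this tier would need a license in this sketch."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Unacceptable-risk systems are banned, not licensed.")
    return tier is RiskTier.HIGH

# A hypothetical hiring-screening model would sit in the high-risk tier.
print(license_required(RiskTier.HIGH))     # True
print(license_required(RiskTier.MINIMAL))  # False
```

The hard policy question hides in the classification step this sketch takes for granted: who decides which tier a given model falls into, and on what evidence.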
The Center on Regulation and Markets at Brookings offers a regulatory framework that answers some of the questions. In a recently published paper, “Market concentration implications of foundation models,” they suggest that “producers of the most advanced foundation models may need to be regulated akin to public utilities.”
Compounding the challenges faced by new regulations is the “major questions doctrine,” a doctrine that limits agencies’ regulatory power when:
(1) The underlying claim of authority concerns an issue of “vast ‘economic and political significance’”
(2) Congress has not clearly empowered the agency.
This doctrine may be why agencies like the Federal Election Commission and the Securities and Exchange Commission have gone to great lengths to explain that their proposed rules are updates to existing authority rather than reaches into new territory. The doctrine, more so than AI itself, may be why a new office, or at minimum new legislation, is necessary.
Details of the Senators’ plan will be unveiled on Tuesday and will hopefully provide some clarity. If a new office isn’t the answer, the Department of Energy, an agency with experience in emerging technology and an extensive network of national laboratories, is already lined up for the job.
Two additional sources related to this topic:
An article on questions surrounding software liability (questions that will also apply to AI liability regimes).
A podcast from The Economist that covers what data is, who owns it, and what it is worth (spoiler alert, not much). It also highlights that:
While privacy rules are trending in consumers’ favor, “big tech” will still hold more data than its competitors.
The “walling off of data” can be characterized as a “data enclosure movement.”
The age of mass data consumption is over, and models will require “specialty data.”
Protecting the children
AI promises to be a crucial tool in the battle against Child Sexual Abuse Material (CSAM). The unfortunate flip side is that generative AI can also create troves of CSAM. To combat this, Australia moved to require search engines to remove AI-generated CSAM from results, and the attorneys general of every U.S. state and territory signed a letter asking Congress to form a commission to study the issue and propose solutions.
Various states’ legislation designed to shield children from online harms, like accessing pornography, is facing pushback from civil rights groups. A proposed federal bill, the Kids Online Safety Act (KOSA), has faced searing criticism from groups like the Electronic Frontier Foundation (EFF) for being, in effect, a censorship bill.
The major obstacle facing efforts to place guardrails on what minors can view online is not legal or civil rights concerns but a technical one: a free VPN renders most restrictions ineffective.
In other news
The Financial Times reported that Microsoft will “assume legal responsibility for any copyright infringement over material generated by the artificial intelligence software it offers…”
The Grammys will not allow AI-generated songs to be eligible for awards—further expanding the AI-copyright battlefield.
As states grapple with AI legislation, tech lobbyists are moving in before bills pass to educate legislators and influence outcomes.
“Google will mandate all political advertisements label the use of artificial intelligence tools and synthetic content in their videos, images and audio.”
AI is a thirsty technology. AI companies’ water usage recently spiked, and researchers have correlated the spike with AI development. This is an area to watch for “AI-adjacent” regulation.