Weekly Roundup: Finally, a Framework
The first substantive Congressional regulatory framework is released, the judicial branch sees AI-related activity, and can the government procure its way to regulation?
Meetings, pledges, and frameworks
The closed-door meeting between Senators and tech executives on AI regulation received a lot of coverage this week. Wired, The Wall Street Journal, and The New York Times all have similar takeaways, although the last may be the most succinct:
Elon Musk warned of civilizational risks posed by artificial intelligence. Sundar Pichai of Google highlighted the technology’s potential to solve health and energy problems. And Mark Zuckerberg of Meta stressed the importance of open and transparent A.I. systems.
While there was probably value in having a frank, closed-door meeting, it does not appear anything groundbreaking was said. From the same story (emphasis added):
“I do not understand why the press has been barred from this meeting,” Senator Elizabeth Warren, Democrat of Massachusetts, said after leaving the meeting. “What most of the people have said is, ‘We want innovation, but we have got to protect safety.’”
What did appear to be a change in the regulation conversation was the presence of civil society members, unions, and academics in the meeting alongside tech executives. This article gives a concise breakdown of each stakeholder’s interests and opinions on the subject.
Senators Blumenthal and Hawley released a short but detailed AI regulation framework. It answers some of the questions I raised last week and, if viewed as a to-do list for Congress, is far more substantial than the AI safety pledges a growing list of companies are making. However, it is still a far cry from actionable legislation.
In the international arena
The Financial Times provides a detailed snapshot of the spectrum of international AI regulation—with the United States and the United Kingdom taking a more industry-friendly approach on one end and China’s authoritarian model on the other.
The European Union, while having a more robust draft law and stricter regulations than the U.K. or U.S., is not ignoring the cries for innovative freedom often heard from the U.S. The EU is allowing AI startups access to its high-performance computers in exchange for adopting its AI governance standards.
Huawei’s phone release renewed debate over the efficacy of export controls. Two grounded reads on how this pertains to the bigger picture of semiconductor export controls are in these two Substacks:
From the bench
The antitrust case against Google kicked off last week. The outcome of this trial could have follow-on effects on the AI industry. For those wanting to follow along, I recommend Big Tech on Trial, a Substack with daily updates and analysis.
One of the things Big Tech on Trial noted:
Judge Mehta even asked Google during its opening statement how he should define a market when there is no price for the product. Google answered that the question should be whether another company’s actions force Google to take competitive action in response to remain profitable.
It will be interesting to see how a market-sans-price argument or standard develops and whether there is fallout for AI: many AI products are free, as are some of their inputs. Additionally, the notion of a generative AI input monopoly is on the Federal Trade Commission’s (FTC) radar.
Additional legal news
An academic article published in the latest edition of the Journal of Intellectual Property Law & Practice, “Attribution problem of generative AI: a view from US copyright law,” lays out the limitations of current U.S. copyright laws.
State Farm was sued late last year for allegedly handling Black policyholders differently than White policyholders. This case is unique in explicitly calling out the role AI played:
Rather than discriminatory animus, Plaintiffs blame State Farm’s use of algorithmic decision-making tools that allegedly resulted in statistically significant racial disparities in how the insurer processed claims.
State Farm’s motion to dismiss was granted in part and denied in part. It will be interesting to see what level of machine learning transparency the court requires State Farm, or a third party, to provide for the trial.
In other news
“San Francisco has formally requested state regulators redo an August hearing that expanded robotaxi permits for Cruise and Waymo.”
Governor Newsom is expected to veto a bill banning autonomous trucks in California.
Authors selling their books on Kindle must notify users if their content is AI-generated. Amazon made an important distinction in its content guidelines between “AI-generated” and “AI-assisted.”
AI-generated content is defined by the company as “text, images or translations created by an AI-based tool,” even if substantial edits are made afterwards.
AI-assisted content is classified as that created by authors and sellers themselves but where AI tools are used to “edit, refine, error-check, or otherwise improve.”
Although this is not a legal standard established by legislation or case law, the distinction will become important as AI copyright cases move through the courts.
Procurement as regulation
Regulation has many flavors. Alongside case law, legislation, and existing regulatory frameworks, government agencies can also flex their procurement power to shape industry. Because the government will require AI to adhere to internal governance policies, procurement policies can have legislative-type effects on how AI is developed and marketed to other sectors. A few of the areas in which the government is exercising its purchasing power:
The Department of Homeland Security (DHS) created a new position, Chief AI Officer, to “promote AI usage while maintaining safety protocols.” The task force that created the position also released two new policy directives:
1. Establish a set of defined principles to which DHS should adhere regarding AI usage in agency operations.
2. Prohibit the inappropriate usage of biometric systems, particularly surrounding facial recognition.
The Internal Revenue Service (IRS) will use AI for complex tax audits and investigations.
The Department of Defense’s (DoD) new Replicator program will reorganize existing funds and projects to develop “attritable, autonomous systems across multiple domains” and is “expected to cost in the range of the hundreds of millions.” CSET provided commentary on the new program and the challenges the DoD can expect to face.
California is also invested in using procurement to drive regulation. Gov. Newsom signed an executive order directing agencies to “find ways to reform public sector procurement so that agencies consider uses, risks, and trainings needed to improve AI purchasing.”