Pioneering Oversight Readers,
I will be traveling for work until mid-November. Unfortunately, this means the Weekly Roundup will be on hiatus. Fortunately, other writers will jump in to take its place.
There will be no publications on October 23rd or 30th, but on November 6th, Jesse Nuese, a cybersecurity researcher, will write a guest post summarizing recent reports and academic research on AI vulnerabilities.
The following week, on November 13th, Chelsea Quilling, also a cybersecurity researcher, will summarize her research on emergent challenges of authenticating digital evidence at the International Criminal Court. Chelsea will present her paper at The Hague on December 2nd.
The Weekly Roundup will return on November 20th, followed by a series of in-depth analyses on facial recognition regulation, what “human in control” really means, and the feasibility of international AI regulation.
A brief roundup
“The Food and Drug Administration (FDA) announced the creation of a new Digital Health Advisory Committee on Wednesday, intended to help support the development of digital health technologies and their regulation.”
For decades, the agency has looked at medical devices much the way it looks at drugs: as static products. When the FDA approves a device, the manufacturer can sell that version, and it needs the regulator’s sign-off before upgrading to a new one. But AI-enabled devices often use algorithms designed to be updated rapidly, or even to learn on their own. The agency is grappling with how to deal with this fast-moving technology while ensuring the devices stay safe.
One solution may be the FDA’s “Predetermined Change Control Plan.” Such a plan would lay out the scope of changes a manufacturer expects to make to a product and would be reviewed by the FDA up front. “Once the device is approved, the company can alter the product’s programming without the FDA’s blessing, as long as the changes were part of the plan.”
A fellow to whom?
Politico published a detailed exposé on the state of AI companies’ lobbying efforts this past week.
An organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.
The project, funded by Open Philanthropy through the non-profit Horizon Institute for Public Service, faced criticism in the story:
The network’s fixation on speculative harms is “almost like a caricature of the reality that we’re experiencing,” said Deborah Raji, an AI researcher at the University of California, Berkeley, who attended last month’s AI Insight Forum in the Senate. She worries that the focus on existential dangers will steer lawmakers away from addressing risks that today’s AI systems already pose, including their tendency to inject bias, spread misinformation, threaten copyright protections and weaken personal privacy.
…Raji worries that Open Philanthropy-funded experts could help lock in the advantages of existing tech giants by pushing for a licensing regime. She said that would likely cement the importance of a few leading AI companies – including OpenAI and Anthropic, two firms with significant financial and personal links to Moskovitz and Open Philanthropy.
“There will only be a subset of companies positioned to accommodate a licensing regime,” Raji said. “It concentrates existing monopolies and entrenches them even further.”
Whether licensing is a good policy solution, and whether future existential threats are worth devoting legislative energy to, are both worthwhile debates. Open Philanthropy may be right about both, but the perception that the fellows are lobbyists in all but name is, at best, a PR problem that could damage the legitimacy of any future regulatory regime.
In other news
Parents are increasingly concerned about their inability to remove their children’s likenesses from the internet. Even parents who eschew sharing their children’s images online can’t control when or where those images end up. In some extreme cases, deepfakes have been used to stage fake kidnappings.
AI companies are having a hard time turning a profit.
AI often doesn’t have the economies of scale of standard software because it can require intense new calculations for each query. The more customers use the products, the more expensive it is to cover the infrastructure bills. These running costs expose companies charging flat fees for AI to potential losses.
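To make that scaling problem concrete, here is a minimal sketch with entirely hypothetical numbers (the flat fee, per-query cost, and usage levels are assumptions for illustration, not figures from the reporting): when revenue is fixed per subscriber but inference cost grows with every query, heavy users can flip a customer from profit to loss.

```python
# Hypothetical illustration of why flat-fee pricing can lose money when
# per-query inference costs scale with usage. All numbers are made up.

FLAT_FEE = 10.00          # assumed monthly subscription price per user ($)
COST_PER_QUERY = 0.04     # assumed inference cost per query ($)

def monthly_margin(queries_per_month: int) -> float:
    """Revenue minus inference cost for one subscriber in one month."""
    return FLAT_FEE - COST_PER_QUERY * queries_per_month

for usage in (50, 250, 500):
    margin = monthly_margin(usage)
    status = "profit" if margin >= 0 else "loss"
    print(f"{usage:>4} queries/month -> ${margin:+.2f} ({status})")

# Break-even usage: the point where inference costs eat the entire fee.
print(f"Break-even at {FLAT_FEE / COST_PER_QUERY:.0f} queries/month")
```

Under these assumed numbers, a light user (50 queries) yields an $8 margin, while a heavy user (500 queries) costs the company twice what they pay, which is the dynamic behind the reported losses on flat-fee AI products.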
“A survey of state technology leaders found that the [State Chief Information Officer] role has evolved from one concerned with building a state’s own tech infrastructure to one focused on acting as a broker of services.”
An opinion piece in The Guardian argues for stricter EU regulation on “robo firings.” The piece also provides an excellent recap of recent labor-AI news.
Saudi-Chinese technology partnerships continue to worry the United States.