Weekly Roundup
Transportation, financial, and political sectors see changes, while the security sector braces for disruption
Autonomous vehicles (AVs) were in the news quite a bit this week:
The Wall Street Journal released dash cam footage from one of the 16 crashes between Teslas and emergency vehicles that are under federal investigation.
On August 10, California’s Public Utilities Commission (CPUC) approved Cruise and Waymo to offer driverless taxi services in San Francisco. In a regulatory quirk, the CPUC has the final say over taxis, while the California Department of Motor Vehicles (DMV) issues the permits that allow companies to deploy autonomous vehicles. Unfortunately for San Francisco:
Local representatives don't actually have much say over what these companies are doing on your streets. It's a state regulator that's going to decide whether those robotaxis can operate there. But the city folks, so the mayor's office, the police department, the fire department have said they've seen a lot of issues with these cars on the roads.
Despite the vote, CPUC Commissioner John Reynolds was clear that the jury is still out on the efficacy of AVs. “While we do not yet have the data to judge AVs against the standard human drivers are setting, I do believe in the potential of this technology to increase safety on the roadway.”
California’s regulatory moves stand in stark contrast to Arizona’s laissez-faire approach. The criminal case stemming from a 2018 crash in Arizona involving an autonomous Uber vehicle came to a close last month. Wired has covered the story in detail, and their latest report provides additional information on other criminal liability cases involving autonomous vehicles.
Familiarity with the technology, however, may end up driving regulation. A recent study found that people familiar with a particular type of technology tend to support favorable public policies toward it. Yet that support does not necessarily translate into consumer action, especially when the technology replaces a task the consumer is already comfortable performing.
Federal agencies began the process of adding new regulations:
The head of the Securities and Exchange Commission (SEC) is concerned about AI’s role in the financial sector. His comments come as the SEC opens new rule proposals for public comment. These intentionally broad rules:
are designed to address the conflicts of interest associated with firms’ use of [Predictive Data Analytics] (PDA)-like technology when engaging in certain investor interactions.
Rather than addressing AI explicitly, the proposal uses a broad definition of the covered technology:
an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes in an investor interaction.
Comments are due on or before October 10, 2023. Section I.C of the proposal provides the most succinct overview.
The Federal Election Commission advanced a petition to regulate deceptive AI-generated campaign advertisements. Whether the rule is needed is a subject of debate, since intentionally deceptive campaign ads are already a violation. Public comments will open shortly.
While most state legislatures are in recess, efforts are ongoing to educate legislators and staffers on AI regulation. The National Conference of State Legislatures released a primer to help lawmakers understand AI. At the media event announcing the primer, panelists said privacy and equity were among the states’ top concerns in regulating AI.
For further privacy-related reading, The Markup covers privacy extensively. This in-depth article explaining the ubiquitous privacy policies we all agree to is a good starting point for anyone looking closely at privacy regulations.
In the think tank world:
The Brookings Institution released commentary on generative AI in the classroom and on the potential for a new digital regulator.
Meanwhile, in the international arena:
President Biden’s ban on U.S. technology investments in China and the continued development of autonomous warfare will also undoubtedly have international and domestic regulatory consequences.
To address concerns about international security, OpenAI and the Berkeley Risk and Security Lab (BRSL) at the University of California published the proceedings of a workshop held “to propose tools and strategies to mitigate the potential risks introduced by foundation models to international security.” Attendees included participants from the private sector, government, civil society, and academia.
Lastly, something to watch:
How states react now that smart guns are finally on the market. New Jersey has a law mandating the sale of smart guns, but others oppose the weapons, citing a combination of Second Amendment rights and privacy concerns.