Weekly Roundup: Diving into the Details
Individual consumer protections get proposed legislation, and organizations confront the realities of AI adoption.
Protecting the consumer
Last week, Senators introduced “The Algorithmic Accountability Act of 2023” to protect individuals from “algorithmic decision making in areas like housing, credit, education and more.” Some highlights from the bill:
• Require the Federal Trade Commission (FTC) to create regulations providing structured guidelines for assessment and reporting.
• Require that companies assess the impacts of automating critical decision-making and report impact-assessment documentation to the FTC.
• Require the FTC to publish an annual anonymized, aggregated report on trends and to establish a public repository where consumers and advocates can review which critical decisions have been automated, along with information such as data sources, high-level metrics, and how to contest decisions.
• Fund the FTC to hire 75 staff and establish a Bureau of Technology.
There are striking similarities between this bill’s requirements and the Cybersecurity and Infrastructure Security Agency’s (CISA) vulnerability disclosure and assessment policies, and it seems the bill’s authors learned from some of CISA’s growing pains. The bill includes:
• Enforcement mechanisms, which the FTC already has.
• A federal framework that can be used or built upon for assessment.
• Mandated reporting. If the bill can maintain a serious mandated reporting requirement, it will be a huge win for consumers. This is probably the requirement most vulnerable to being watered down, however, as no company ever wants to voluntarily give up information that could affect its competitiveness.
Paralleling the bill, the Consumer Financial Protection Bureau (CFPB) issued guidance about legal requirements when a company uses AI to make adverse decisions against consumers.
[If] adverse decisions [are] made by complex algorithms, creditors must provide accurate and specific reasons. Generally, creditors cannot state the reasons for adverse actions by pointing to a broad bucket.
For instance, if a creditor decides to lower the limit on a consumer’s credit line based on behavioral spending data, the explanation would likely need to provide more details about the specific negative behaviors that led to the reduction beyond a general reason like “purchasing history.”
Companies taking adverse action may have two avenues of compliance. The first is to have a human review every algorithmic decision. The second is to develop a model that explains why a decision was made. An example of this type of system is being tested in the U.K. by a company that is integrating decision-explaining chatbots into self-driving cars. It is possible a model like this could meet CFPB guidelines if the explanations were specific and accurate enough.
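To make the second avenue concrete, here is a minimal sketch of how a lender might turn a simple linear model’s per-feature contributions into specific adverse-action reasons. Everything in it is hypothetical: the feature names, weights, baselines, and reason text are invented for illustration, and a contribution-based explanation is just one simple approach, not anything the CFPB prescribes.

```python
# Hypothetical sketch: mapping a linear credit model's per-feature
# contributions to specific adverse-action reasons. All names, weights,
# and baselines below are invented; a real system would need to validate
# that its explanations are accurate and specific enough for compliance.
import numpy as np

FEATURES = ["utilization_ratio", "late_payments_12mo", "account_age_years"]
WEIGHTS = np.array([-2.1, -1.4, 0.8])   # assumed model coefficients
MEANS = np.array([0.30, 0.5, 7.0])      # assumed population baselines

REASONS = {
    "utilization_ratio": "Credit utilization of {:.0%} exceeds typical levels",
    "late_payments_12mo": "{:.0f} late payment(s) in the past 12 months",
    "account_age_years": "Average account age of {:.1f} years is below typical",
}

def adverse_action_reasons(x, top_n=2):
    """Return the specific factors that pushed the score down the most."""
    contributions = WEIGHTS * (np.asarray(x, dtype=float) - MEANS)
    order = np.argsort(contributions)  # most negative (most harmful) first
    return [
        REASONS[FEATURES[i]].format(x[i])
        for i in order[:top_n]
        if contributions[i] < 0
    ]

print(adverse_action_reasons([0.85, 3, 2.0]))
# ['Average account age of 2.0 years is below typical',
#  '3 late payment(s) in the past 12 months']
```

The design point is that each returned reason ties to a concrete, contestable input rather than a broad bucket like “purchasing history.”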
Bean counting
Last week, I mentioned how the government can use procurement to regulate. This week, JD Supra published a snapshot of different committees’ appropriations bills. There is AI funding in nearly every bill, covering a variety of purposes. JD Supra’s takeaway from their research:
While the appropriations bills themselves include few references to AI, the Appropriations committees may still add AI language in the various bills. With that said, it is clear from the Committee reports that AI funding and advancement is a priority for both the House and Senate.
The appropriations bills vary in specificity, but most appear to at least name concrete projects to spend the money on. Very few use vague terms like “expand usage of technology such as AI.”
Capabilities and limitations
Purchasing AI is only part of the equation, however. A WSJ article found that many companies will buy high-speed tech without planning for (or understanding) the required infrastructure updates:
The rapidly developing automation technology can help speed up operations and lift some of the burden off human workers, but the tools have a new set of requirements such as access to far more electrical power and a strong internet signal.
…upgrading a warehouse’s internet can be as simple as calling the internet provider to increase the bandwidth or as complicated as installing fiber-optic cable lines, antennas and server rooms, depending on the type of automation being added and the existing connections.
Another part of the equation is identifying a problem that AI can effectively solve. The New York City Police Department unveiled a “fully autonomous” security robot equipped with video cameras (but no audio) that can be used to call for help. It costs about $9 an hour and, Mayor Eric Adams emphasized, “doesn’t require bathroom breaks.” However, according to the article, the New York transit system is already full of cameras, subway crime is down 4.5%, and cell phone coverage is ubiquitous, so it is unclear what advantage the robot has over existing systems.
In other news
The White House teases an AI Executive Order planned for this week.
The FTC’s nominees, two Republicans and one Democrat, all agreed that the FTC has a role in pursuing unfair and deceptive acts that involve AI.
State governors were especially active this week:
California Governor Gavin Newsom vetoed a bill that would have banned “self-driving trucks without a human aboard from state roads until the early 2030s.” Gov. Newsom said the bill was unnecessary because existing agencies are already drafting regulations.
Pennsylvania Governor Josh Shapiro signed an executive order establishing guiding principles for the state’s use of AI and creating a governing board.
Virginia Governor Glenn Youngkin signed an executive order directing a review of pathways for state regulation, an identification of AI cybersecurity vulnerabilities, and the development of a training path for students to become proficient in AI.
Requests for public comment
The U.S. Food and Drug Administration will produce a new strategy to guide emerging technology use, including AI data management. The FDA requests public comment by October 30.
The U.S. Copyright Office extended the deadline for public comments on a notice requesting information on AI-copyright topics. I wrote a background of AI-copyright issues and a summary of the notice when it was initially published.
The National Artificial Intelligence Advisory Committee (NAIAC) will host its next briefing with invited experts on Friday, September 29, from 12:00 to 2:00 p.m. ET. The next NAIAC meeting is on October 19 at the Department of Commerce in Washington, DC.
The FTC will host a virtual roundtable on October 4 to discuss the impact of generative AI.
In the international arena
An Indian court ruled in favor of actor Anil Kapoor, protecting his likeness against misuse. The court “acknowledged the actor's personality rights and restraining others from ‘misusing’ his attributes without permission. The order applies to the actor's rights globally, across media formats.”
The Australian Federal Police told its government it “uses AI to analyse data obtained under telecommunications and surveillance warrants.”
A GDPR privacy complaint filed in Poland accuses OpenAI of processing data illegally.
Spain established an AI supervisory agency, the first in the EU. “The Spanish Artificial Intelligence Supervisory Agency (AESIA) is expected to be operational within three months.”