Weekly Roundup: The Real Lives of Tech Celebrities
What drives the AI industry's quest for regulation? Plus, a litany of judicial news and federal comment requests.
Although the much-hyped Senate meeting on AI concluded nearly three weeks ago, coverage of the civil society groups in the room remains scarce. A short, informative MIT Technology Review interview with Inioluwa Deborah Raji, a researcher at the University of California, Berkeley, who attended the meeting, illuminated a viewpoint from someone who isn’t a “tech celebrity.”
Ms. Raji observed that tensions ran highest when auditing and evaluation regulations came up. She also noted the chasm between AI’s technical reality today and the promise it holds for the future:
Hyperbolic risks came up because [representatives from tech companies] would be trying to divert attention from the reality that a lot of the current-day risks are because the technology is actually not super great. Sometimes it will fail and behave in unexpected ways. More often, the failures impact those that are underrepresented or misrepresented in the data.
What do the tech celebrities want?
A guest essay in the New York Times, written by Bruce Schneier and Nathan Sanders, both affiliated with Harvard University, parses the ideologies and regulatory goals of different tech celebrities. The essay:
Frames the debate. “Where you stand depends on where you sit.” Different people have different motives: sometimes a product of their position, like being the CEO of an AI company; other times a dogmatic vision for humanity’s future or a political ideology. The authors frame the debate in terms of the stakeholders:
To understand the fight and the impact [AI regulation] may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you’ll realize this isn’t really a debate only about A.I. It’s also a contest about control and power, about how resources should be distributed and who should be held accountable.
Names the factions. The authors name three categories of individuals advocating for AI control:
The Doomsayers. These people are primarily concerned with the long-term consequences of AI, such as AI’s enslavement of humanity, or with catastrophic global problems like climate change that they believe will require AI-driven solutions.
The Reformers. These people are focused on the immediate problems of the technology, like racial biases.
The Warriors. This group is primarily concerned with national security.
Assigns deliverables. The authors close with a few brief but pointed paragraphs on the direction regulation should take:
Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment.
Pay no attention to…
The Google antitrust trial continues to draw criticism for its lack of transparency. Hours of testimony and many exhibits have been shielded from the public:
[Judge] Mehta said at a pretrial hearing that it was difficult for him to know what business information deserved to be sealed.
“I am not anyone that understands the industry and the markets in the way that you do,” Mehta said to the lawyers. “And so I take seriously when companies are telling me that if this gets disclosed, it’s going to cause competitive harm.”
Multiple variables are involved in these decisions, and it is unclear whether statements like Judge Mehta’s will make their way into the decision or impact future litigation. However, it may be one example of how judges defer to industry when dealing with closely guarded machine learning data sets and processes.
…The information behind the paywall
In a win for public information, an appeals court upheld a ruling that allows third parties to make copyrighted standards freely available once they are incorporated into law. The opening paragraph of the ruling states (emphasis added):
Many private organizations develop and copyright suggested technical standards for an industry, product, or problem. Federal and state governments often incorporate such standards into law. This case presents the question whether third parties may make the incorporated standards available for free online. We hold that the non-commercial dissemination of such standards, as incorporated by reference into law, constitutes fair use and thus cannot support liability for copyright infringement.
As more AI legislation is enacted, this ruling may affect how lawmakers draft laws that incorporate industry standards, publishable data audits, or other measurable machine learning metrics.
Indemnify this
IBM and Getty Images join the growing list of companies willing to indemnify users of their generative AI systems against IP lawsuits. Both will also be transparent about the data sources used to train their systems.
A jury will decide an IP lawsuit. Reuters sued Ross Intelligence, accusing it of unlawfully using Reuters data to train its machine learning system. This may be the first AI IP lawsuit to reach a jury.
Spanish prosecutors are investigating whether AI-generated pictures of naked teen girls constitute a crime. The case differs from other AI/CSAM investigations in that real girls’ non-nude photos were manipulated into nude ones and the accused are themselves between the ages of 13 and 15.
ShotSpotter, a company that uses internet-connected microphones to triangulate gunshots for first responders, is the subject of a petition by the civil society group Electronic Privacy Information Center asking the Justice Department to investigate violations of the Civil Rights Act. ShotSpotter has a long history in the “predictive policing” industry, having undergone various acquisitions and name changes over the years.
In other news
The National Conference of State Legislatures (NCSL) published a fantastic database of 2023 state-level AI legislation, annotating whether each bill passed, failed, was vetoed, or is pending.
The Food and Drug Administration (FDA) is seeking comment on AI in drug manufacturing.
The National Credit Union Administration (NCUA) amended a rule allowing credit unions “to participate in loans acquired through indirect lending arrangements, allowing…advanced technologies and opportunities offered by the fintech sector.”
Researchers may have developed a better way to measure biases in computer vision systems by increasing the number of evaluation methods.
The Oak Ridge National Laboratory announced the opening of a new center to study the effects of artificial intelligence. The laboratory is a federally funded R&D lab in Tennessee with a long history of contributing to energy and national security research.