Weekly Roundup: Investment bans, data scarcity, and copyright laws
Breaking down the ban on tech investment in China, a look at the free market's approach to data, and can AI art be copyrighted?
Tech investment ban impacts AI-adjacent sectors
Last week I briefly mentioned the new Executive Order (EO) banning investment in specific technologies (semiconductors, quantum computing, and AI). The order, which invokes a national emergency, warns that “countries of concern” (China, Hong Kong, and Macau) could exploit:
…certain United States outbound investments, including certain intangible benefits that often accompany United States investments and that help companies succeed, such as enhanced standing and prominence, managerial assistance, investment and talent networks, market access, and enhanced access to additional financing.
Buyout groups have already felt the impact of US-Chinese investment constraints, even before the order. The Financial Times reports that “buyout groups struck deals in China worth $47bn in 2021, but that fell rapidly to just $2.4bn in 2022 and $2.8bn so far this year.” The investment decline may reflect concerns that these rules tend to “drift” outside their initial targeted area.
The podcast below provides an excellent primer on export controls in the current context of emerging technology and China, addressing, among other topics, how to judge the efficacy of controls and why supportive partner nations are necessary.
The podcast is an even timelier listen given that the United Arab Emirates and Saudi Arabia are buying chips used to develop AI, and that the latter’s data analytics lab is staffed primarily by Chinese researchers.
Old regulations, new questions
The EO is an interesting regulatory approach for four reasons:
It uses existing legal frameworks to manage emerging technology. Omnibus AI legislation isn’t needed to begin regulating the technology. (A fantastic piece published last week in Lawfare discusses this exact topic.)
It is probably a harbinger of things to come for “AI-adjacent” sectors. It dictates actions in industries that are not directly engaged with the targeted technology, in this case private equity.
China’s domestic policy is an issue. While not stated explicitly in the EO, China’s domestic surveillance program is well known, and “mass-surveillance capabilities” appears in the Treasury Department’s proposed rule.
“Potential” can be a problem. The EO is concerned about technology with the “potential to significantly advance the military, intelligence, surveillance, or cyber-enabled capabilities.” The time horizon for that “potential” is left undefined.
Dual use, dual problems
The U.S. is working to maintain the duality of a secure national security infrastructure and an open society. Working through that very complex problem also requires tackling an unsolved, equally complex one: how to control dual-use technology.
An AI product’s end use is being considered as a potential metric for solving the dual-use problem. The United States Treasury Department’s (USDT) proposed rule, currently open for public comment, seems to understand the issues connected to this metric. On lines 45 and 47 of the proposed rule, it asks:
To make sure the development of the software that incorporates an AI system is sufficiently tied to the end use, two primary alternatives are under consideration: “designed to be exclusively used” and “designed to be primarily used.” What are the considerations regarding each approach? Is there another approach that should be considered?
What analysis or considerations would a U.S. person anticipate undertaking to ascertain whether investments in this category are covered?
The last question is an equally difficult one. Facial recognition technology has multiple, wide-ranging uses and names:
“Facial recognition” identified rioters on January 6th and war dead in Ukraine.
“Facial analysis” prevents underage users from accessing pornography in Utah.
“Biometric facial comparison technology” is used at ports of entry in the U.S.
Apple’s “Face ID” lets users unlock iPhones and authorize Apple Pay with their faces.
It is unclear 1) whether the nuances of AI end use will be lost, making any iteration of the technology subject to the ban, 2) whether that will make enforcing the rule impossible, or 3) whether USDT can navigate the use cases while still meeting the intent of the EO.
Complicating the dual-use issue even further is the singularity of the Chinese system—something the EO notes specifically:
Moreover, these countries eliminate barriers between civilian and commercial sectors and military and defense industrial sectors, not just through research and development, but also by acquiring and diverting the world’s cutting-edge technologies, for the purposes of achieving military dominance.
Another excellent podcast by SCSP gives China’s perspective on these regulatory actions, the difficulty of discerning where (or if) state control stops, and how Beijing views these technologies as a cure for its demographic woes.
The proposed rule is open for public comment until September 28, 2023. The Brookings Institution published guidance in 2018 on how to write a public comment.
Can the market fix it?
“AI is setting off a great scramble for data,” The Economist declared in a story last week. In it, the correspondent explains:
The two essential ingredients for an AI model are datasets, on which the system is trained, and processing power, through which the model detects relationships within and among those datasets. Those two ingredients are, to an extent, substitutes: a model can be improved either by ingesting more data or adding more processing power.
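The substitutability the correspondent describes has been studied quantitatively. As a rough illustration (a toy sketch using the published coefficient estimates from the “Chinchilla” scaling-law paper, Hoffmann et al., 2022, not anything from The Economist piece), model loss can be written as a function of parameter count and training tokens, showing how extra data can stand in for a bigger model:

```python
# Toy illustration of the data/compute trade-off, using the loss form and
# published coefficient estimates from Hoffmann et al. (2022), "Training
# Compute-Optimal Large Language Models" (the "Chinchilla" paper):
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens.

def loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# A smaller model trained on more data can match (here, beat) a much
# larger model trained on less:
print(loss(70e9, 1.4e12))   # ~Chinchilla: 70B params, 1.4T tokens  -> ~1.94
print(loss(280e9, 300e9))   # ~Gopher: 280B params, 300B tokens     -> ~1.99
```

Under these estimates, the 70B-parameter model trained on 1.4T tokens edges out a model four times its size trained on less data, which is precisely the substitution the article describes.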
Regarding processing power:
Venture capital firms are chasing solutions to this problem, offering their startups access to chips during the initial phases of their ventures.
The federal government is also part of the solution. The National Science Foundation’s ACCESS program helps make computing power available to organizations.
Other companies are moving off cloud computing entirely, a less expensive and potentially more efficient model.
Terms and conditions
As for the data part of the equation, IP lawsuits and basic supply and demand have increased data access costs. Different companies have different takes on this issue:
In June, Reddit, which holds a plethora of data, raised prices for accessing and monetizing specific amounts of its data.
Zoom updated its terms and conditions (T&C) to allow the company to use data transmitted over its platform.
Large AI companies (Anthropic, OpenAI, and Meta) have entered into agreements with Zoom to access its data troves.
At the other end of the spectrum, The New York Times is restricting the use of its data in its T&C and has not ruled out legal action against OpenAI.
Open datasets are one potential solution. However, if a model requires large amounts of diverse data (which fixing training-data bias certainly will), open datasets may not be a silver bullet for smaller AI companies.
It is not unreasonable to speculate about the rise of data monopolies or data cartels (especially in niche data areas). Likewise, this area may be ripe for the Federal Trade Commission (FTC) to step in, particularly if the cost of accessing data shuts off market access, a concern central to the Chair’s philosophy.
At the federal level
Senator Mark Warner (D., Va.) sent a series of letters to tech executives outlining his concerns about AI. This comes the same week Democrats in the House formed an AI working group.
The General Services Administration (GSA) is seeking volunteers to test its new biometric login method using facial recognition. The push came after a finding that Login.gov does not meet NIST identity-proofing standards.
The Cybersecurity and Infrastructure Security Agency (CISA) released a statement Friday declaring that AI products must be “secure by design.” The agency connects existing cybersecurity policies to AI products. In short:
…manufacturers of AI systems must consider the security of the customers as a core business requirement, not just a technical feature, and prioritize security throughout the whole lifecycle of the product, from inception of the idea to planning for the system’s end-of-life [and] AI systems must be secure to use out of the box, with little to no configuration changes or additional cost.
New EU laws force companies to make changes
“More or less every different provision of these laws requires a process change, an architectural change, or both,” said Kent Walker, Google’s president of global affairs.
One result of the new regulation is that TikTok is letting EU users shut off its algorithm. The impact of these laws may be insignificant for U.S. users: close to 50% of Americans still favor banning TikTok, and the language of recent bans (Montana and NYC) suggests the concern is more about China than cognitive liberty.
In other news
Schools looking to curb AI-enabled cheating may need more creative solutions than AI detection tools. These tools can be biased against international students because they use syntax complexity and word choice as indicators (the toy sketch after this list shows how such signals can misfire).
The regulatory wins for robotaxis in San Francisco are already being walked back after a series of incidents, including a collision with a fire truck.
Kansas Governor Laura Kelly instructed agencies to adopt a statewide AI use policy approved last month. This remarkably straightforward and short (3-page) policy lays out clear and specific AI use regulations.
The Brookings Institution published an in-depth comparison of existing and proposed state and federal laws protecting children from online harm. The upshot for AI regulation is whether (or how) AI will be used to solve the age-verification problem and whether AI platforms will fall under the definition of “social media platform.”
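Circling back to the AI-detection item above: to see why syntax and word-choice signals can misfire, here is a deliberately naive, hypothetical scorer (my own sketch, not any real product’s algorithm). It rates text as more “AI-like” when vocabulary is less varied and sentence lengths are more uniform, traits that are also common in second-language writing:

```python
# A deliberately naive, hypothetical "AI-text" scorer (not any real
# detector's algorithm). It flags text with low vocabulary diversity and
# uniform sentence lengths, traits also common in second-language writing,
# which is how this style of signal can misfire.
import re
import statistics

def naive_ai_score(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    diversity = len(set(words)) / max(len(words), 1)          # word-choice signal
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    # Less varied vocabulary + more uniform sentences -> higher "AI" score.
    return max(0.0, 1.0 - diversity - 0.05 * burstiness)

print(naive_ai_score("The cat sat. The cat sat. The cat sat again."))  # repetitive, uniform text scores higher
print(naive_ai_score("Serendipity struck; we scrambled, laughing, toward the harbor."))  # varied text scores lower
```

A writer with a smaller working vocabulary and more regular sentence structure gets a higher score here without any machine involvement, which is the bias the research on international students points to.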
What is art, anyway?
A federal court ruled against a plaintiff trying to copyright AI-produced art.
The ruling draws on a combination of case law, custom, and occasionally strained metaphors (including a case involving copyright claims for works attributed to divine beings).
It touches on an interesting legal standard. Using a camera as a technological metaphor (emphasis added):
A camera may generate only a ‘mechanical reproduction’ of a scene, but does so only after the photographer develops a ‘mental conception’…Copyright has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright.
The judge acknowledges there are many unanswered questions about AI-generated art, although the case before her “is not nearly so complex.”
Some of her questions:
How much human input is necessary to qualify the user of an AI system as an “author” of a generated work
[What is] the scope of the protection obtained over the resultant image
How to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works
How copyright might best be used to incentivize creative works involving AI
Some of my questions:
Is it reasonable to expect a cognizable threshold for the “guiding human hand”? What discipline(s) should decide how much AI is too much? Is this an engineering question, a political one, or an artistic one?
Do the engineers who trained the machine deserve some credit? When does the creative process start and stop? These questions may be absurd under the camera analogy, but perhaps not so absurd when applied to machine-learning processes.
What impact will this have on AI tools for artists with disabilities? Can a color-blind painter develop a “mental conception” sufficient to warrant a copyright if they use generative AI to assist with color coordination?
The future of this case may be bleak. The Supreme Court has already turned away a challenge to U.S. patent law concerning AI-generated inventions, and the plaintiff is the same in both cases.
Do you belong to a sector or industry concerned with “AI-adjacent” regulation? If so, send me an email. I’d love to hear your thoughts, concerns, and ideas!