Weekly Roundup: The Palace Intrigue of Emerging Technology Companies
How little the philosophical musings of Silicon Valley matter to the regulatory world
The drama surrounding Sam Altman and OpenAI has gotten more than its fair share of press. How much it will really change AI is a topic of debate. For a more grounded take, I recommend this Ethan Mollick post, aptly titled "Not much is changing, a lot is changing."
From a corporate governance perspective, how the company was set up is worth investigating. It will be interesting to see how much of a role that structure played in the unfolding of last week’s events and whether it is viable at all. The premature postmortems on ethical capitalism, however, probably lack the data to make them legitimate critiques.
The philosophical tensions between “techno-capitalists” and “effective altruism” or its offshoot, “long-termism,” have gotten a lot of attention as well.
Techno-capitalists believe “with the application of ample amounts of venture capital and outsized ambition, any sufficiently disruptive idea can take over the world — or at least, overthrow some sleepy business incumbent.”
Long-termism “is based on a belief that the interests of later generations, stretching far into the future, must be taken into account in decisions made today.”
Effective Altruist “adherents seek to maximise their impact on the planet to do the most good.”
The tensions between these philosophies are often reduced to a capitalist-versus-doomsayer debate, but that framing judges only the moral gilding. Those at the center of these philosophies share the same trait: They are the ones who know what is best, and They should be left alone.
For one group, this notion manifests in regulatory ideas like licensing regimes, under which a select group of established companies can retain a monopoly on the technology. For another group, it is a self-selected board that should control the moral progression of AI. In either camp, it is hard not to feel that the concerns of a majority of Americans (real or perceived) are being written off as pedestrian musings, and that the democratic institutions most responsive to them are treated as inconvenient at best.
Altruistic government
Toby Ord, an Effective Altruism philosopher, discussing OpenAI’s board structure with The Financial Times, says:
“At least there was a company that was being held accountable to something other than bottom-line profit.” “I don’t see any alternatives,” he adds. “You could say government, but I don’t see governments doing anything.”
Unfortunately, Mr. Ord is empirically wrong. The debate around AI regulation is quite traditional: is the government doing too much or too little? Governments are doing quite a bit. Some highlights of AI regulation news just this past week:
The Biden Administration’s Executive Order on AI sent off a flurry of activity around the U.S. government (This might be the best synopsis I have seen thus far).
California, the home turf of many AI companies, released a generative AI report on Tuesday and has been at the tip of the spear with AI and other tech regulations.
The U.S. Federal Trade Commission is streamlining its process to determine if and how AI was used to commit a crime.
AI-related patent applications are up across all sectors—indicative of a functioning administrative state that rewards and protects innovation.
The National Institute of Standards and Technology (NIST) published a blog post this week highlighting past initiatives, like the AI Risk Management Framework, and future ones, like researching how best to measure AI inputs and outputs.
This week, Wired highlighted a lawyer leading the AI copyright charge against corporations, which would not happen without a (government-run) court and legal system that felt it could rule in AI-related cases.
There have also been some public-private interactions. Michigan announced the Capitol will use AI to enforce a gun ban, leveraging procurement power to shape the industry. The Federal government’s “AI Talent Surge” faces the same labor market problem all companies looking to hire AI talent do, something Amazon’s AI Skills Training seeks to solve.
Altruistic electorate
It is easy to take sides, viewing the government as the savior and looking enviously at the EU’s regulatory regimes or, in contrast, ceding regulatory sovereignty to the free market, whether or not that includes enlightened altruists. The answer probably lies somewhere in between. The path the United States is on is perhaps the best for balancing innovation and safe AI development: utilizing existing regulatory agencies and laws to protect consumers and national security while not stifling capital at the onset of innovation.
There are some unaccounted-for wildcards, too. The recent congressional retirements, including a champion of digital rights in the heart of Silicon Valley, mean that there is an opportunity for companies to influence regulation with candidates they recruit and fund. Continued technological developments, like last week’s improvement in machine learning training on synthetic data, could further disrupt the AI industry. Much-needed but unpassed legislation, like privacy protections, or the outcome of Google’s antitrust trial could change many of Silicon Valley’s business models.
This is not to say the path is perfect; passing regulatory rules and moving lawsuits through the courts take time, but these are the tradeoffs for allowing the market to spur innovation. These are also the tradeoffs for a self-governing populace. Large groups of people need time to become educated and make decisions, however inconvenient that may be. The question the electorate, and by extension the government, will need to answer is to what extent, and for how long, they wish the market to develop this technology under the current regulatory regimes.