Weekly Roundup: Big Tech Diplomacy
AI companies negotiate in secret with Chinese experts despite Beijing's ongoing anti-democratic activities.
Artificial Diplomacy
The Financial Times reported this week:
Artificial intelligence companies OpenAI, Anthropic and Cohere have engaged in secret diplomacy with Chinese AI experts, amid shared concern about how the powerful technology may spread misinformation and threaten social cohesion.
According to multiple people with direct knowledge, two meetings took place in Geneva in July and October last year attended by scientists and policy experts from the North American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.
The White House knew about the meetings but declined to comment for the Financial Times story. Given China's use of AI to undermine U.S. democracy and to make its surveillance state more efficient, it is unclear whether either side actually expects the negotiations to be fruitful or whether there are unstated ulterior motives.
In a piece highlighting the differences between the US and China this week, The Guardian stated:
In the summer of 2023, Chinese authorities proposed rules for generative AI, mandating that AI-generated content, whether images or text, must align with the “core values of socialism” and must not undermine state authority, harm national unity, or spread false information. AI service providers were also mandated to prevent users from becoming too dependent on their services.
While there is some broad idealistic overlap between the passage above and the Executive Order on AI, the differences are stark. What China's 2023 proposal leaves unsaid is that while AI-generated content must not undermine Chinese state authority, it is evidently free to undermine other countries' systems. Perhaps the confusion lies in the definition of “false information.”
With 83 elections taking place around the world this year, China, along with Russia, is leading the way in driving deepfake influence over democratic processes. Slovakia's pro-Russian prime minister may have gained enough of a boost from a deepfake scandal to get across the finish line in last year's election. The Prime Minister of the UK has been targeted with deepfakes as well, ironically at a time when the nation is moving in the opposite direction from much of the world on AI regulation, drafting laws that would allow regulation only under certain circumstances.
In other news
Due to US export controls, Chinese companies are repurposing older Nvidia chips for use in AI development.
“Russia has compromised internet-connected webcams in Ukraine to conduct remote surveillance.” The future of warfare will rely on a meshed civil-military sensor network, and tactics like this could have massive impacts on the sensor inputs feeding AI systems, affecting decision-making in areas ranging from traffic monitoring to target selection.
The EU joined the UK and the US in investigating whether the OpenAI-Microsoft relationship violates its merger rules.
Indemnity offers from Big Tech protecting their users seemed like uncharacteristic benevolence; however, most of these protections are narrow, excluding use on third-party cloud platforms or of models that other customers have fine-tuned.
An in-depth piece from JDSupra discusses forthcoming regulations from the EU, Canada, California, and U.S. administrative agencies, and recommends actions for companies looking to adopt AI. The recommendations focus on privacy and mitigating third-party vendor risk.
The FTC warned AI companies about their obligation to protect users’ data and privacy.
NIST is hosting a workshop on Wednesday, January 17, 2024, from 9:00 AM to 1:00 PM EST to bring together industry, academia, and government to discuss secure software development practices for AI models.
Novel Policy Ideas
Three novel policy ideas before you go:
Global AI Tax
Without intervention, the next chapter of the technological revolution risks once again privatising profits while pushing the costs of mitigating its harms on to the public. Paying for welfare and reskilling laid-off workers are not just economic downsides: they signal the kinds of societal shifts that easily lead to political unrest. For generations, work has been the foundation not just of family income but also of people’s routine and sense of purpose. Try imagining what you would do without your job. To rebalance the cost-benefit impacts of AI in favour of society — as well as to make sure the necessary response is affordable at all — taxing AI companies is the only logical step.
From the Georgetown Law Technology Review Winter 2024 issue:
Addressing the gap between legal precedent and machine learning: Limitations of the “Four-Fifths Rule” and Statistical Parity Tests for Measuring Fairness
To ensure the fairness of algorithmic decision systems, such as employment selection tools, computer scientists and practitioners often refer to the so-called “four-fifths rule” as a measure of a tool’s compliance with anti-discrimination law. This reliance is problematic because the “rule” is in fact not a legal rule for establishing discrimination, and it offers a crude test that will often be over- and under-inclusive in identifying practices that warrant further scrutiny… When these tests are used prospectively as an optimization objective shaping model development, additional concerns arise about the development process, behavioral incentives, and gameability.
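For readers unfamiliar with the test, here is a minimal sketch, not drawn from the article, of how the four-fifths comparison is typically computed; the group names and selection rates are hypothetical, and the example also hints at the crudeness the authors criticize, since the ratio ignores sample size and statistical significance.

```python
# A minimal sketch (not from the article) of the "four-fifths rule" comparison:
# each group's selection rate is divided by the highest group's rate, and a
# ratio below 0.8 is flagged. Note the test ignores sample sizes, which is one
# reason it can be over- or under-inclusive.

def four_fifths_check(selection_rates: dict[str, float]) -> dict[str, bool]:
    """Map each group to True if its selection rate is at least 80% of the
    highest-rate group's rate, False otherwise."""
    top_rate = max(selection_rates.values())
    return {group: rate / top_rate >= 0.8 for group, rate in selection_rates.items()}


# Hypothetical rates: group_b is selected at 60% of group_a's rate and is
# flagged, regardless of whether the gap is statistically meaningful.
print(four_fifths_check({"group_a": 0.50, "group_b": 0.30}))
# {'group_a': True, 'group_b': False}
```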
Digital Access to Justice: Automating Court Fee Waivers in Oklahoma
This Note discusses a novel, technology-enabled approach to address the issue of people being sent to jail for unpaid court debt in the United States. In 2016, the state of Oklahoma passed a law that makes formerly incarcerated people eligible for a waiver of their outstanding court fees if they comply with all probation requirements and make payments on their court debt for twenty-four consecutive months following release. However, awareness of this mechanism for debt relief is low among debtors, defense attorneys, and judges, and the process to determine eligibility is technical and complicated. This leads to low waiver uptake and high levels of outstanding debt that hold formerly incarcerated people back from building savings and fully ending their entanglement with the criminal legal system. Our team worked in partnership with Legal Aid Services of Oklahoma to explore the potential of digital tools to increase access to these court debt waivers.
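As an illustration of why the eligibility determination is hard to do by hand, here is a hypothetical sketch of the kind of screening check a digital tool might automate; the data fields and function names are my own assumptions based only on the criteria described above, not Legal Aid Services of Oklahoma's actual tool.

```python
# Hypothetical sketch of an automated eligibility screen for Oklahoma's
# court-fee waiver, based only on the criteria described above: compliance
# with probation requirements and 24 consecutive months of payments after
# release. Field names and structure are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class DebtorRecord:
    complied_with_probation: bool
    monthly_payments_made: list[bool]  # one entry per month since release


def eligible_for_waiver(record: DebtorRecord, required_months: int = 24) -> bool:
    """Return True if the record shows probation compliance and an unbroken
    run of at least `required_months` consecutive monthly payments."""
    if not record.complied_with_probation:
        return False

    consecutive = 0
    for paid in record.monthly_payments_made:
        consecutive = consecutive + 1 if paid else 0
        if consecutive >= required_months:
            return True
    return False


# Example: 24 straight payments and full probation compliance -> eligible.
print(eligible_for_waiver(DebtorRecord(True, [True] * 24)))  # True
```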