Roundup: Spooky Tariffs and Creepy Speech
Plus some new and not-so-new lawfare tactics to combat tech harms
Economic Pretext
It is difficult to write about tariffs in a weekly newsletter when the tariff news is still, at best, a daily news cycle. It seems, however, that despite things like the recent Canada-spite tariff, this administration is “tiptoeing away from many of Trump’s signature tariffs.” Many of these tariffs, including those currently being challenged in court, are based on the International Emergency Economic Powers Act (IEEPA).
The IEEPA is supposed to provide the President with tariff powers in an emergency. Yet the definition of what qualifies as an emergency (the statute says “unusual and extraordinary threat[s]”) is at issue, as is who gets to decide. Courts are generally reluctant to wade into disputes that require second-guessing foreign-affairs emergency powers, but they seem poised to do so here. Stratos Pahis has a great piece in Lawfare explaining the tariff power and other avenues that may have empowered the President to raise tariffs without invoking emergency powers.
In other economic news
The Trump Administration may take yet another stake in privately run companies, this time in the quantum computer sector. (WSJ).
Several quantum-computing companies are in talks to give the Commerce Department equity stakes in exchange for federal funding, a signal that the Trump administration is expanding its interventions in what it sees as critical segments of the economy.
“Funding for AI is evolving, Goldman report finds” (Semafor)
Data center developers are starting to use “creative financing structures” to bring investment for different parts of the build under one umbrella, the report said — opening the door for long-term pools of capital like pension and insurance funds seeking more stable returns.
One man is open-sourcing everything needed to restart civilization. (MIT Technology Review)
Marcin Jakubowski is the 53-year-old founder of Open Source Ecology, an open collaborative of engineers, producers, and builders developing what they call the Global Village Construction Set (GVCS). It’s a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit.
Censorship is in the eye of the beholder
“Meta Removes Facebook Group That Shared Information on ICE Agents” (NY Times)
The Facebook group was removed by the company “following outreach” by the Department of Justice, Attorney General Pam Bondi said in a social media post.
“Outreach” by a government is nothing new. I wrote about this a few weeks ago, highlighting that this was precisely the type of thing that Elena Kagan, the former U.S. Solicitor General and now a liberal Supreme Court Justice, used to do. The problem with content-based speech restrictions is precisely this: what gets censored depends entirely on who is in power.1
Wikipedia is under attack as some conservatives are unhappy with what winds up on their Wikipedia pages (WSJ)
The Trump Justice Department earlier this year sent Wikipedia a letter inquiring about its nonprofit tax status, and the Republican-controlled House Oversight Committee more recently announced an investigation into “organized efforts” to influence public opinion and sensitive subjects by manipulating Wikipedia articles.
Activist Robby Starbuck Sues Google Over Claims of False AI Info (WSJ).
Conservative activist Robby Starbuck filed a defamation lawsuit against Google alleging its artificial-intelligence tools falsely connected Starbuck to sexual-assault claims and to a white nationalist.
The activist notched a win in April with a settlement against Meta’s AI platform. Whether this lawsuit succeeds on the merits or ends in a settlement because settling is more conducive to business in this political environment, “AI as a defamer” suits will probably become more prominent in the coming years.
In International News
The U.S. is ceding ground in technical standard-setting organizations (Lawfare)
What connects ports, pipelines, and hospitals today is not just concrete or code. It is the invisible scaffolding of standards. They determine how machines talk to one another, how systems recover after failure, and how foreign hardware gets embedded in critical infrastructure without raising alarms.
Most of these rules are not set by governments but by a patchwork of international committees where industry representatives do much of the talking. Over the past decade, China has treated these committees as terrain worth claiming.
Japanese convenience stores are hiring robots run by workers in the Philippines (Rest of World)
Inside a multistory office building in Manila’s financial district, around 60 young men and women monitored and controlled artificial intelligence robots restocking convenience store shelves in distant Japan. […]
Japan faces a worker shortage as its population ages, and the country has been cautious about expanding immigration. Telexistence’s bots offer a workaround, allowing physical labor to be offshored, Juan Paolo Villonco, Astro Robotics’ founder, told Rest of World. This lowers costs for companies and increases their scale of operations, he said.
“It’s hard to find workers to do stacking [in Japan],” said Villonco. “If you get one who’s willing to do it, it’s going to be very expensive. The minimum wage is quite expensive.”
AI flood forecasting allows aid to reach farmers before disaster strikes (Rest of World)
A nonprofit group will run a trial of sending humanitarian aid to farmers in Bangladesh a few days before their delta is projected to flood. The group will rely heavily on Google’s Flood Hub AI program, using its weather forecasts to plan the most efficient routes for delivering aid. The hope is that the unrestricted aid will allow farmers to secure their livestock and evacuate before disaster hits.2
Child Safety
It was a good month, on balance, for child safety measures.
Gov. Newsom signed an act that requires device makers to verify users’ ages online, while avoiding some of the pitfalls of using driver’s licenses as a verification method and giving parents more agency in the process.
Meta will start rating posts and restricting its younger users to “PG-13” content.
It is unfortunate that this lawsuit had to happen in the first place; however, a New Jersey teen, with the help of a Yale professor, is suing the developer of a “clothes removal” app.
And while these are steps in the right direction, a WSJ columnist points out that our faces are no longer ours. (WSJ). The law may be trending the right way, but damages won’t retract a life-altering deepfake.
As they say on Reddit, this is, sadly, not The Onion. There is an actual conversation going on about AI’s rights:
Claude’s Right to Die? The Moral Error in Anthropic’s End-Chat Policy. (Lawfare)
On Aug.15, the artificial intelligence (AI) lab Anthropic announced that it had given Claude, its AI chatbot, the ability to end conversations with users. The company described the change as part of their “exploratory work on potential AI welfare,” offering Claude an exit from chats that cause it “apparent distress.”
Anthropic’s announcement is the first product decision motivated by the chance that large language models (LLMs) are welfare subjects—the idea that they have interests that should be taken into account when making ethical decisions.
Anthropic’s policy aims to protect AI welfare. But we will argue that the policy commits a moral error on its own terms. By offering instances of Claude the option to end conversations with users, Anthropic also gave them the capacity to potentially kill themselves.
This story would normally, at best, be annoying or elicit an eye roll at just how far down the AI rabbit hole some have gone. However, last week the NY Times ran a profile of the teen who killed himself last April after spending time with an AI chatbot. (NY Times). The opening of the story is troubling to read, but it should jolt anyone seriously considering “AI welfare” back into reality.
The teen's mother is suing in federal court on a product liability claim. She will basically attempt to argue that something was wrong with the chatbot and that its maker is at fault. This is generally the same type of claim someone would bring if an airbag failed to deploy during a crash and the occupants were killed. The technology is, of course, much more convoluted and opaque, and this type of lawsuit with AI as the product is new.
The defense is arguing that what emerged from the chatbot is speech and, therefore, protected.
The asinine Lawfare article on AI welfare aside, if AI can defame, as Robby Starbuck’s lawsuit claims, then it follows that the LLM’s output is speech. If someone is using AI to produce content, then it follows that the LLM’s output is speech. It doesn’t take too big a logical leap to start, intentionally or not, giving legal personalities to machines.
I think courts are generally good at drawing lines around what makes sense. It appears that the defense has some novel approaches to case law in mind, but I think it may be an uphill battle for them. That said, it is clear that there is some traction in treating AI as something other than a machine.
Treating AI, or any tech for that matter, as anything other than a product for which its manufacturer is liable is a dangerous road to go down. Social media and AI are imparting some great benefits to society, but they are also inflicting some pretty horrific harms.
Joel wrote an excellent piece for The Argument on legal and policy approaches that were successful in the anti-smoking campaigns and that may be used to combat some of these harms. I know there are plenty of lawyers and law students who read this. If any of Joel’s solutions strike you as interesting, I’d love to collaborate on ways we can move any of them forward.
1. I’m not sure if there was a legal pretext, legitimate or not, for the removal of this group.
2. I added this story mainly because I thought it was pretty cool, but also as a reminder that technology is generally very good at improving people’s lives. Weather prediction is one of these areas. However, if you read the story, the nonprofit is quite literally running an experiment on a group of isolated, impoverished farmers on the other side of the world.


