Inside The Covered Wagon: Regulating Autonomous Weapons PT III
Challenging the prospect of meaningful control
This is the final installment of a three-part series on regulating autonomous weapons. Part I is a primer on the weapons systems and existing regulatory regimes. Part II explores how to hold a human accountable for war crimes committed by an autonomous weapon.
The state of affairs
This series has focused on so-called “killer robots,” or Lethal Autonomous Weapons Systems (LAWS), that can decide, independent of human oversight, to kill in combat—something that is no longer science fiction:
The Pentagon is testing AI-enabled drones with increasing levels of autonomy in all domains.
U.S. military officers are working through frameworks to increase trust between humans and machines, which is a significant problem when humans are unfamiliar with the tasks being automated.
However, fixating on AI systems that control the entire targeting cycle or chain of command risks ignoring other AI applications in warfare.
Cyberwarfare is expected to be an AI v. AI domain in the near future.
Israel Defense Forces have used AI in target selection and incorporated the data into an app-based command and control system that can be carried on a phone.
The war in Ukraine is a testing ground for AI in warfare:
Palantir’s software, which uses AI to analyze satellite imagery, open-source data, drone footage, and reports from the ground to present commanders with military options, is “responsible for most of the targeting in Ukraine,” according to Karp. Ukrainian officials told me they are using the company’s data analytics for projects that go far beyond battlefield intelligence, including collecting evidence of war crimes, clearing land mines, resettling displaced refugees, and rooting out corruption.
These examples illustrate that algorithmic warfare extends far beyond the robo-soldier. Decisions made or assisted by AI can be rooted in strategic decision-making and support functions far removed in space and time from the front lines and trigger-pulling infantry.
Centaurs or cyborgs?
The example I laid out in Part II of this series presents an AI-human team in a combat scenario. This scenario is helpful for working through legal questions of criminal responsibility. It is, however, myopic in its presentation of the human pilot and AI-driven drone swarm as two separate systems, much like a rider sitting atop a horse: each has its own motivations and agency. Even though the rider may control the horse by the reins, he can still be thrown, should the horse put in the effort.
Ethan Mollick describes the AI-human task relationship in the workforce as a “jagged frontier” and coins the Centaur and Cyborg models to describe how humans and machines interact.
The centaur
Centaur work has a clear line between person and machine, like the clear line between the human torso and horse body of the mythical centaur. Centaurs have a strategic division of labor, switching between AI and human tasks, allocating responsibilities based on the strengths and capabilities of each entity.
When I am doing an analysis with the help of AI, I often approach it as a Centaur. I will decide which statistical techniques to use, but then let the AI handle producing the graphs.
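For the code-minded, here is a minimal sketch of what that Centaur split can look like in Python. Everything in it is illustrative rather than drawn from a real analysis: the data are made up, the choice of Welch's t-test simply stands in for the kind of judgment call I keep for myself, and the plotting boilerplate below the divider is what I would hand to the AI.

```python
# A hypothetical Centaur workflow, not pulled from any real project.
# The human makes the analytical judgment call; the plotting boilerplate
# below the divider is the kind of work delegated to an AI assistant.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=50)  # illustrative data
group_b = rng.normal(loc=11.0, scale=2.5, size=50)

# Human decision: Welch's t-test, because I am not willing to assume equal variances.
result = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch's t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# --- AI-delegated portion: routine chart the assistant can write from a one-line prompt ---
fig, ax = plt.subplots()
ax.boxplot([group_a, group_b])
ax.set_xticks([1, 2], labels=["Group A", "Group B"])
ax.set_ylabel("Measured value")
ax.set_title("Group comparison (test statistic printed above)")
plt.show()
```

The specific libraries do not matter; the point is that the line between human judgment and machine labor stays visible.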
The cyborg
On the other hand, Cyborgs blend machine and person, integrating the two deeply. Cyborgs don't just delegate tasks; they intertwine their efforts with AI, moving back and forth over the jagged frontier.
Bits of tasks get handed to the AI, such as starting a sentence for the AI to complete, so that Cyborgs find themselves working in tandem with it. This is how I suggest using AI for writing, for example.
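As a rough sketch of that hand-off (assuming the OpenAI Python SDK and an API key in the environment; the model name and prompt are purely illustrative, not a recommendation), starting a sentence and letting the model complete it might look like this:

```python
# A hypothetical Cyborg hand-off: I start the sentence, the model finishes it,
# and I keep editing from there. Assumes the OpenAI Python SDK (openai>=1.0)
# and an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

draft_opening = "The hardest part of regulating autonomous weapons is"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Finish the user's sentence in a single clause."},
        {"role": "user", "content": draft_opening},
    ],
)

completion = response.choices[0].message.content
# The human takes the combined sentence and keeps revising from here.
print(draft_opening + " " + completion)
```

In practice the back-and-forth is far tighter than a single call, but even this toy example shows how quickly authorship starts to blur.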
The horse rider
Perhaps the consequences for individual criminal responsibility are not as grave in the centaur/cyborg model as in the bifurcated “horse rider” example I laid out in Part II. A human who interacts with the AI, to whatever extent, remains the perpetrator of the crime. This is a much different proposition than if an independent lethal AI system perpetrated the act without human input.
Centaurs and cyborgs in warfare do present several other challenges. Using AI in high-level strategy can drive humans down paths they might not otherwise take. In a study using off-the-shelf LLMs in simulated wargames, researchers found that “models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons” with “worrying justifications based on deterrence and first-strike tactics.”
Most AI regulatory regimes deal with risks associated with the capabilities of technology (it doesn’t do what it should) and the use of technology (it shouldn’t be used in this way). The intertwining of humans and technology presents another set of risks, however, such as technical deskilling or the warping of moral judgment. How problematic these risks are, and to what degree they can be mitigated through controlling measures, is controversial, but they exist nonetheless.
How much control?
Regulating AI rests on a significant (and easily challenged) assumption: AI can be effectively controlled.
The earliest manual I found on controlling automation in warfare is from the U.S. Army, dated July 1986, “Guidelines for Automation: A How-To Manual for Units Receiving Automated Command and Control Systems.” The manual, mostly dealing with data and communications, takes an institutional approach: safeguards and checks can be implemented within existing institutions. The DOD’s 2023 Data, Analytics, and Artificial Intelligence Adoption Strategy adopts a similar approach, with an added emphasis on transparency.
Institutional safeguards (think double verification, command oversight, and extensive training) can get ahead of a lot of problems if implemented effectively. In order to have institutional safeguards, however, the institution must have trained and disciplined critical thinkers; even then, their control over AI may not be guaranteed.
Technical deskilling
This problem is sometimes termed “the paradox of automation”. An automated system can assist humans or even replace human judgment. But this means that humans may forget their skills or simply stop paying attention. When the computer needs human intervention, the humans may no longer be up to the job. Better automated systems mean these cases become rare and stranger, and humans even less likely to cope with them. -Financial Times, Of top-notch algorithms and zoned-out humans
The U.S. Army is working through the problem of technical deskilling in land navigation. An entire generation of soldiers now joining the ranks has always had the convenience of GPS in their pocket. Teaching analog land navigation for communication-strained environments to soldiers who have never had to flip through a Thomas Guide shoved under the back seat of a car is a far more significant challenge than it was a generation ago.
A more harrowing example of technical deskilling is Air France Flight 447, which crashed into the Atlantic in 2009.
In the case of Flight 447, the challenge was a storm that blocked the airspeed instruments with ice. The system correctly concluded it was flying on unreliable data and, as programmed, handed full control to the pilot. Alas, the young pilot was not used to flying in thin, turbulent air without the computer’s supervision and began to make mistakes. As the plane wobbled alarmingly, he climbed out of instinct and stalled the plane — something that would have been impossible if the assistive fly-by-wire had been operating normally. The other pilots became so confused and distrustful of the plane’s instruments, that they were unable to diagnose the easily remedied problem until it was too late.
This tragedy is similar to the Boeing 737 MAX crashes, where, among other things, the technology was not regulated adequately because the FAA did not understand it. Which skills the military, and society as a whole, are willing to let atrophy is a strategic decision with military implications as much as a convenience-based one.
Moral deskilling
“When human atoms are knit into an organisation in which they are used, not in their full rights as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood,” for the human and the human world is increasingly shaped along the lines of a machine logic. -Norbert Wiener, The Human Use of Human Beings
Like technical deskilling, overreliance on AI also risks moral deskilling. This concept, on which Elke Schwarz has written extensively (and from whose piece I pulled the quotation above), holds that humans can never truly control AI. Her stance is best articulated by quoting Ms. Schwarz herself, from her conclusion:
And so, the ultimate question: to what degree are we able to act as moral agents in the use of lethal autonomous intelligent weapons? If we cannot readily understand or predict how intelligent LAWS might interact not only with the contingent, dynamic environment of warfare but also with our human capabilities and limitations, if we are unable to intervene in a timely manner, if we are unable to challenge an algorithmic decision on its technological authority, is it possible to retain the level of human control required for a morally meaningful decision? I am doubtful.
This is not to say that the notion of meaningful human control, as a legal concept, is of no use…With this article, however, I aim to push further and highlight where the limits to human control over such intelligent technologies reside. Conditions that would safeguard or indeed promote human moral agency cannot be ensured with LAWS. On the contrary, the capacity to take responsibility and feel the weight of a morally complex decision becomes more difficult as these decisions are distributed and mediated through technological interfaces, nodes, and various system components. The human is integrated as an .exe file into a technological ecology that is largely invisible, and which operates far beyond human capacities.
To give the moral deskilling argument the time it deserves would far exceed the scope of this post. While I do not dismiss the argument, and I believe its moral warnings should be at the forefront of everyone’s mind, I am not convinced that AI changes the moral calculus to such a degree that humans become a “.exe file.” Controlling an instrument, a tool such as AI, is not easy, but it is also not implausible.
The state of political affairs
Advances in military technology can produce horrific humanitarian results, and these disasters often spur international law protections. The prohibitions on expanding bullets and the use of chemical weapons followed the horrors of colonial wars and WWI, respectively. Other advances in military technology have been blunted through policy, like the abstention from nuclear weapons use, which has held thus far because of deterrence and aggressive non-proliferation measures taken by nuclear-armed countries.
AI differs from the above examples in that it is a dual-use, already widely proliferated technology with extremely low barriers to entry. There have been no horrors at the scale of WWI, making the pitfalls of the technology mostly theoretical. Coupled with what is essentially a fourth-industrial-revolution-driven arms race, any global regulatory effort in the current environment would probably go the way of the 1922 Washington Naval Treaty.
While it may be implausible to regulate LAWS via treaty, aggressive and consistent criminal prosecutions could set left and right boundaries for LAWS use. The upside of criminal prosecution:
Enforces existing laws that prohibit crimes like indiscriminate targeting of civilians.
Establishes human accountability.
Reinforces human agency by sending a clear signal that authority, responsibility, and accountability fall on humans and not machines, nor in a “black hole.”
Sensational campaigns to ban LAWS, like “Stop Killer Robots,” galvanize people to advocate for open-ended regulation; however, they disregard the current political climate while ignoring the fact that warfare can’t be rushed. The doctrine for autonomous warfare is still anybody’s guess, just as it was for industrial warfare during and immediately after WWI. What the concrete prohibitions should be, short of a universal ban, is simply unknown—we have not seen algorithmic warfare at its maturity. We have barely seen it in its infancy.
In the meantime, domestic AI regulation among alliances with shared values toward human life can set the stage for later conventions and criminal liability regimes. Strategy, procurement, and battlefield practice in AI warfare will serve as evidence in shaping future treaties and practices and will help crystallize customary international law. Without countries taking an interest in domestic AI applications, however, there is no reason to think that, when national security is thrown into the mix, the calculation will suddenly err on the side of caution.