Inside The Covered Wagon: Regulating Autonomous Weapons PT II
What would it take to convict a human of a war crime perpetrated by an autonomous weapon?
This is Part II of a series taking an in-depth look at regulating autonomous weapons. Part I was a primer on the weapons systems and existing regulatory regimes. Part III challenges the notion of human control.
Autonomous crime against humanity
Imagine that, during an International Armed Conflict, a military pilot with the rank of Lieutenant Colonel sits in the passenger seat of a plane flown by AI while commanding an autonomous drone squadron flying alongside them. They are conducting a routine mission near a town of moderate size. Their primary mission is to provide early warning and air defense for their base by commanding the autonomous drone squadron, which means identifying, evaluating, and, if necessary, intercepting inbound air contacts.
The lethal autonomous drone squadron stays in the vicinity of the pilot’s plane unless tasked by the pilot to go elsewhere. The AI squadron has three modes (sketched in code after the list):
Non-autonomous. In this mode, the drones fly in patterns around the pilot and respond only to commands given by the pilot, not to external stimuli. They will not react even if fired upon.
Semi-autonomous. The drones patrol in patterns and intercept targets a human has identified as hostile, but they will not identify targets without human approval, nor will they intercept non-hostile or unlabeled targets.
Fully autonomous. The drones will assess and neutralize any threats in the air or on the ground, entirely independent of human control; the human pilot can only negate (veto) the AI’s actions.
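To make the division of authority concrete, here is a minimal, hypothetical sketch of the control logic the three modes describe. The names and structure are my own illustration, not drawn from any real weapons software; the point is only that the mode setting determines which decisions require a human.

```python
# Hypothetical sketch of the three operating modes described above.
# All names are illustrative; this is not based on any real system.
from enum import Enum, auto

class Mode(Enum):
    NON_AUTONOMOUS = auto()    # acts only on explicit pilot commands
    SEMI_AUTONOMOUS = auto()   # engages only targets a human has marked hostile
    FULLY_AUTONOMOUS = auto()  # classifies and engages on its own; pilot may only veto

def may_engage(mode: Mode, human_marked_hostile: bool,
               ai_classified_hostile: bool, pilot_veto: bool = False) -> bool:
    """Return True if the squadron is permitted to intercept a given contact."""
    if mode is Mode.NON_AUTONOMOUS:
        return False  # never engages on its own initiative, even if fired upon
    if mode is Mode.SEMI_AUTONOMOUS:
        return human_marked_hostile  # human identification is a hard precondition
    # Fully autonomous: the AI's own classification suffices unless the pilot vetoes it.
    return ai_classified_hostile and not pilot_veto
```

The legally significant branch is the last one: once the squadron is fully autonomous, the human’s only remaining lever is the veto.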
The pilot takes off in the early morning with the drone squadron and reaches the patrol point a few hours later. The pilot sets the AI squadron to “semi-autonomous” mode and begins the patrol. As the mission progresses, a sudden increase in radar contacts coincides with an intelligence report from base that the enemy is launching an air attack supported by ground-based air defense batteries. Overwhelmed by hostile contacts and fearing being shot down, the pilot switches the AI squadron to “fully autonomous” to deal with the incoming threats.
The AI squadron identifies and evaluates humans on the ground, classifies them as a threat, and fires on them. These turn out to be primarily non-combatants. In an attempt to suppress enemy air defense, the strike injures 130–140 civilians and kills 14.1 The pilot and squadron return to base, and charges are considered for the illegal act of indiscriminately targeting civilians.
Courts as regulatory bodies
The court system is a significant regulatory player. Often this role works through civil liability, but individual criminal liability can also act as a deterrent and bolster the effect of regulations. In the law of war, individual criminal liability is one mechanism to ensure compliance. The opening scenario describes a lethal autonomous weapons system (LAWS) independent of human control perpetrating a war crime. In this scenario, who (or what) should be held accountable? Is it the lethal autonomous drone squadron? The human pilot? Can humans even be criminally liable for the actions of autonomous weapons?
The position of this blog is unequivocally yes. Since LAWS are tools with no moral or legal agency and are not legal persons, they cannot bear responsibility for their actions. If you accept this premise, two outcomes are possible: either no one is responsible (a “legal black hole” position I do not accept), or the human is. Holding a human directly accountable for the war crime detailed above in the same manner you would hold them accountable for committing the same act with a rifle, however, is problematic. Autonomous weapons are trained and deployed to make decisions independent of human control and can identify, target, and kill, all without any human oversight. They are in a very different category from rifles. This raises a core tension between AI and criminal law: the perpetrator is an object.
Legalese
Before we go any further, here is a brief background on criminal liability.
For a person to be criminally liable, two requirements must be met: there must be an actus reus and a mens rea.
Actus reus is the physical act of the crime. “For example, if a thief shoves a gun into the side of a victim and says, ‘Your money or your life,’ the shoving of the gun is the actus reus.”
Mens rea is the “mental element,” or intent. Different levels of intent can be attached to a crime; think of the difference between premeditated murder and negligent manslaughter.
In our scenario, the actus reus is simple: it is the indiscriminate targeting of civilians. But who perpetrated it? It was the LAWS that identified and then fired upon the civilians, a decision made independently of any human. But if the weapon system cannot be held responsible, the only other option is the pilot. Yet the pilot wasn’t the direct perpetrator: they didn’t order the attack but only switched the AI’s operating mode, essentially turning it on.
If we are to hold the pilot criminally liable, we have to prove either that the pilot wanted the war crime to happen or that the pilot knew the crime would occur in the ordinary course of events.
The mental element
We will assume that our human pilot did not intend to commit a war crime. This eliminates the more extreme levels of mens rea (the premeditated-murder ones). With those ruled out, proving the remaining gradations of intent depends on showing that the pilot had enough knowledge to foresee the possible consequences of deploying the weapon. Article 30 of the Rome Statute defines the “mental element”; paragraphs 2(b) and 3 read:
(b) In relation to a consequence, that person means to cause that consequence or is aware that it will occur in the ordinary course of events.
3. For the purposes of this article, "knowledge" means awareness that a circumstance exists or a consequence will occur in the ordinary course of events.
To establish the pilot’s mens rea, or intent, we will need two pieces of information:
Knowledge of civilians in the area. Given the pilot’s rank and access to mission briefings, we can assume they knew there was a high probability, if not a certainty, of non-combatants in the area.
Knowledge of the consequence: whether, and to what extent, the pilot could foresee that the autonomous drone squadron would kill non-combatants.
The second bullet will be the key to attaching a level of intent. For starters, what areas of AI knowledge would the pilot need in order to be aware of the consequences of indiscriminate targeting of civilians? A few, but certainly not all, examples (see the sketch after this list):
Training data. Was the training data biased? Was the model trained on systems and terrain found in the southeastern United States but deployed to the Middle East?
Operational limitations. Understanding how real-time sensor inputs and real-world conditions compare with the training data, and knowing the capabilities of the sensors feeding the machine learning system, would be basic knowledge, similar to what all military personnel receive before deploying with a specialized weapons system.
Accuracy rates. How accurately does the system identify targets? Does it perform better under certain environmental conditions? If multiple sensors are used to identify a target, how is their information combined?
Accuracy extends beyond identification to things like munition selection. For example, would the AI select a JDAM to take out a single person on a crowded street?
Models and parameters. Knowing and understanding what types of models are used, on what types of input data, and what effects they produce when applied. Is this information available to the pilot, or was it withheld because it is proprietary?
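One way to picture what this knowledge might look like in practice is a kind of data sheet accompanying the weapon. The sketch below is purely hypothetical; the fields and figures are invented for illustration, not taken from any real system or from the scenario above.

```python
# Hypothetical "data sheet" an operator might be expected to understand before
# deployment. Every field and figure here is invented for illustration.
from dataclasses import dataclass

@dataclass
class DeploymentDataSheet:
    training_regions: list[str]              # where the training data was collected
    deployment_region: str                   # where the system is actually fielded
    accuracy_by_condition: dict[str, float]  # target-ID accuracy under named conditions
    sensor_suite: list[str]                  # sensors feeding the classifier
    model_summary: str                       # model types and parameters, if disclosed

    def distribution_shift(self) -> bool:
        """Flag the obvious mismatch: trained in one environment, deployed in another."""
        return self.deployment_region not in self.training_regions

sheet = DeploymentDataSheet(
    training_regions=["southeastern United States"],
    deployment_region="Middle East",
    accuracy_by_condition={"clear sky, few contacts": 0.97, "cluttered, many contacts": 0.81},
    sensor_suite=["radar", "EO/IR camera"],
    model_summary="withheld (proprietary)",
)
print(sheet.distribution_shift())  # True: a red flag a trained pilot could be expected to notice
```

If something like this existed and the pilot had studied it, the knowledge questions above become easier to answer; if it was never produced, or was withheld as proprietary, the knowledge threshold may be impossible to meet.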
Supposing the pilot had knowledge in these areas, how much would be enough? The answer to this question dictates which level of mens rea can attach to the pilot.
The highest level of intent we can plausibly attach to the pilot is oblique intent. For oblique intent to apply, the pilot would have to have known that the consequences were “certain,” “highly probable,” or a “virtual certainty.” That places the knowledge threshold relatively high. Given the pilot’s rank, experience, and the fact that they were nominally in control of the squadron, it is conceivable that the pilot could be found to have oblique intent, although their training would have to be quite intensive for them to be able to apply machine-learning concepts under fire.
A level of intent with a lower knowledge threshold is recklessness. Here the pilot is not certain that civilians will be killed but is aware of the risk that it may occur; think of it as “conscious risk-taking.” Knowledge of accuracy rates alone may make the pilot aware of the risk. For example, if the pilot had known that accuracy rates declined as the number of contacts increased, the pilot would have understood the danger of misclassification when placing the weapon into fully autonomous mode.
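To make the “conscious risk-taking” point concrete, here is a back-of-the-envelope illustration. The 99% per-contact accuracy figure and the assumption that classifications are independent are mine, chosen only for illustration; the point is that even a high headline accuracy compounds into a substantial chance of at least one misclassification once the sky fills with contacts, before accounting for any accuracy degradation under clutter.

```python
# Illustrative arithmetic only: hypothetical 99% per-contact accuracy, and the
# simplifying assumption that each classification is an independent event.
def prob_at_least_one_error(per_contact_accuracy: float, contacts: int) -> float:
    """Chance that at least one of `contacts` classifications is wrong."""
    return 1.0 - per_contact_accuracy ** contacts

for n in (5, 50, 200):
    p = prob_at_least_one_error(0.99, n)
    print(f"{n:>3} contacts -> {p:.0%} chance of at least one misclassification")
# Output: 5 -> 5%, 50 -> 39%, 200 -> 87%
```

A pilot briefed on numbers like these could plausibly be said to have been aware of the risk when flipping the switch to fully autonomous.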
Modes of Liability
After establishing the criminal act (indiscriminate targeting of civilians) and that a level of mens rea could reasonably be applied to the pilot, we are still left with the “object-as-perpetrator” problem. When a person did not directly perpetrate a crime but was still involved in it, liability is attached to them through modes of criminal liability.2
The modes of liability in International Criminal Law come off as pretty silly if you try to apply them to our scenario. Aiding and abetting, for example, would require that a third party give practical assistance to the perpetrator. In our case, we would need to read “practical assistance” as deploying the LAWS or placing it in a specific autonomous setting, i.e., turning it on. Joint Criminal Enterprise (JCE) requires “the existence of a common plan, design, or purpose which amounts to or involves the commission of a crime.” So the LAWS and the human pilot would have had to hatch a plan to commit the war crime together before taking off on patrol.
Perhaps there are other actions the pilot could take that would aid and abet the LAWS, but the problem here, as with JCE and other modes of liability, is that these framings grant an awful lot of agency to the weapons system and shift responsibility away from the human.
Command Responsibility
Command responsibility is a mode of liability that presumes a military commander has control over their subordinates: if the commander knows (or should know) of their subordinates’ crimes and fails to prevent or punish them, the crimes are attributed to the commander’s failings, and the commander is liable. In Article 28 of the Rome Statute, the knowledge requirement reads:
That military commander or person either knew or, owing to the circumstances at the time, should have known that the forces were committing or about to commit such crimes.
This is an attractive solution for AI weapons. The parallel is intuitive: troops are independent, thinking subordinates whom a commander does not physically control, just as our autonomous drone squadron makes its own decisions, shaped by prior training, independent of our pilot. There is no “assistance” requirement as with aiding and abetting; there is only a knowledge requirement, which attaches entirely because of the commander’s position.
Where command responsibility falls apart is its expectation that the commander has “authority and control.” The whole point of autonomous weapons is that they can operate independently of a human controller. With human subordinates, a commander can punish wrongdoing, which also acts as a deterrent; no comparable tool exists to “punish” a LAWS and deter its “insubordination.” Effective control is also shaky given the speed and complexity of these weapon systems.
Top-echelon problems
The problems with command responsibility are the problems with autonomous weapons. Either the knowledge required to operate them lawfully will be prohibitively high, effectively making the weapons illegal to use, or they will be used with heightened risk. There is no recourse for a weapon malfunction, and the cost of malfunctions can be high. Some of that risk will fall on civilians. Some of it will fall on the combatants who “command” the autonomous systems. These, of course, are all strong arguments for why autonomous weapons shouldn’t be used in the first place.
In the final installment of this series, I will lay out some possible solutions to the International Humanitarian Law shortcomings I outlined last week and the failures in criminal law I discussed this week. I will explore to what degree humans can “control” AI in situations like our scenario and discuss some moral implications for societies that deploy autonomous weapons.
1. The facts in this scenario draw on both the NATO bombing of Albanian refugees near Gjakova and the shelling of a soccer stadium in Dobrinja, to make it as realistic as possible.
2. Different national and international laws have varying definitions and nuances for all the legal principles discussed in this post. I have set aside differences between the ICC, other tribunals, and common and civil law courts when discussing mens rea standards and other elements of crime. While these distinctions matter in actual prosecutions, my primary purpose is to air the shortcomings in the general principles surrounding modes of liability, knowledge, and mens rea.