Inside the Covered Wagon: Regulating Autonomous Weapons
The first installment of an in-depth look at regulating autonomous warfare
This is the first installment of a three-part series on regulating autonomous weapons. Part II explores how to hold a human accountable for war crimes committed by an autonomous weapon. Part III challenges the notion of human control.
Pioneering Oversight is hitting the trail again for the next two weeks, so while the Weekly Roundup will be paused, I will begin a new regular section called "Inside the Covered Wagon." This section of Pioneering Oversight will spend time digging into more nuanced regulatory issues. The first subject will be autonomous weapons; this installment will be a broad overview of the issue.
For this issue, I draw heavily from my graduate thesis, Lethal Autonomous Weapons Systems: Morality, Obligations, and Responsibility in the Law of War. The significant topics I cover in the thesis, and will begin to cover in this post and subsequent ones, are:
Do autonomous weapons have moral agency?
Can they carry out legal obligations in war?
What are the shortcomings of the current law of war regime?
What happens if AI commits a war crime?
What are the pragmatic concerns of introducing autonomous weapons onto the battlefield?
The future is already here…
AI in warfare is a broad topic. Often called "algorithmic warfare," it spans a wide variety of tasks, from logistics optimization to intelligence analysis to killing, that can be automated in some capacity. What I will be covering here are Lethal Autonomous Weapons Systems, or LAWS.
Depending on who you are talking to, LAWS or any other AI-driven kinetic device that can kill conjures up different images. There is no global or official term to describe AI in warfare, so it is best to get specific about what systems are being discussed. Killer robots present different legal and regulatory issues than an automated munitions selection system. Here are a couple of helpful definitions when talking about lethal and autonomous weapons systems:
A weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.
-United States Department of Defense (DoD)
[A weapon system] capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system can take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.
-United Kingdom Ministry of Defence (MoD)
What weapons exist today?
The New York Times published a brief synopsis of the history of weapons that do not require a human to activate them, along with recent AI weapons development. Weapons available today range from automated sentry guns, target-selection programs, and missile-selection systems to unmanned aircraft. Drone swarms, automated or semi-automated attritable drones used to overwhelm enemy forces, are the most probable next iteration of automated warfare.
Many private sector companies (Anduril, Palantir, Saildrone, and Shield AI, to name just a few) are developing versions of autonomous weapons systems. Like most emerging technological developments, some of these technologies are dual-use. The U.S. Navy uses maritime drones with some level of automation that can be fitted with weapons or with sonar, like the drone that discovered an underwater mountain off the coast of San Francisco. Although drones come up frequently in discussions of autonomous warfare, the two are not synonymous. No matter which definition you use, "taking action independent of a human" is a requirement for automation, and not all drones meet it.
While automating warfare isn't anything new (depending on your definition of "automated," the technology has existed for the last 50-100+ years), the ethical questions it raises about the future of war worry many. Most countries are concerned about the prospect of a machine deciding to kill a human. The morality of using LAWS will be the subject of a future installment in this series.
…The law, maybe not
War is governed by a series of treaties and by customary law (rules originating from a general practice accepted as law). The International Committee of the Red Cross (ICRC) maintains a portal with introductory primers on international law and detailed analyses of whatever aspect of law and conflict you want to know more about.
The two major bodies of law regulating autonomous weapons are Human Rights Law and International Humanitarian Law (IHL). From the ICRC:
Human Rights Law is a set of international rules established by treaty or custom, on the basis of which individuals and groups can expect and/or claim certain rights that must be respected and protected by their States.
IHL is a set of rules that seeks, for humanitarian reasons, to limit the effects of armed conflict. It protects persons who are not, or are no longer, directly or actively participating in hostilities, and imposes limits on the means and methods of warfare. IHL is also known as "the law of war" or "the law of armed conflict".
Some important facts to know about both bodies of law:
While IHL applies exclusively in armed conflict, human rights law applies, in principle, at all times, i.e. in peacetime and during armed conflict.
IHL binds all parties to an armed conflict and thus establishes an equality of rights and obligations between the State and the non-State side for the benefit of everyone who may be affected by their conduct (an essentially 'horizontal' relationship).
Human rights law explicitly governs the relationship between a State and persons who are on its territory and/or subject to its jurisdiction (an essentially 'vertical' relationship), laying out the obligations of States vis à vis individuals across a wide spectrum of conduct.
Unlike IHL, some human rights treaties permit governments to derogate from certain obligations during public emergencies that threaten the life of the nation. Certain human rights can never be derogated from: among them, the right to life, the prohibition against torture or cruel, inhuman or degrading treatment or punishment, the prohibition against slavery and servitude and the prohibition against retroactive criminal laws.
The emphasis on that last bullet point is mine and important to note, along with the fundamental differences between the two bodies of law. There is debate about where IHL begins and where Human Rights Law ends. For example: "Does IHL cover the 'Global War on Terror'?" or "At what point is a conflict no longer a war but a policing action?" Questions like these make regulating autonomous weapons very difficult on the international stage. Using autonomous weapons lethally outside of an armed conflict could be an illegal deprivation of the right to life; using the same weapons to the same effect against a combatant in an international armed conflict may not be.
Automating complex, legally ambiguous, and debatable decisions is problematic for many studying this issue. One consequence of deploying LAWS in situations where the law is so unsettled is that these weapons may end up pushing customary law in a direction humans don't want it to go. That scenario would be truly unprecedented.
A weapon's journey through regulation
Develop it
Focusing strictly on international armed conflict and IHL, what current mechanisms could regulate these weapons? First are internal state policies. We often forget about these when discussing international law, but if states have laws restricting the deployment or development of autonomous weapons, that is a type of regulation. If many states do (and follow them in practice), it could be argued that a customary law regulating LAWS exists. Some examples of state policy regulating autonomous weapons from the United States:
DoD Directive 3000.09 on Autonomy in Weapons Systems. This directive sets policy and responsibility for using and developing autonomous weapons systems.
The State Department's Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. While an externally facing document, it can be viewed as an indicator of where the administration stands and how it will drive domestic policy.
Other places to look are doctrine publications, speeches, and essays by military commanders on how or if the United States would use autonomous weapons in combat.
Test it
When any new weapon is developed, it must go through testing to ensure the weapon is legal in the first place. Another explanation from the ICRC:
Article 36 of Additional Protocol I to the Geneva Conventions (AP I) states that each State Party is required to determine whether the employment of a new weapon, means or method of warfare that it studies, develops, acquires or adopts would, in some or all circumstances, be prohibited by international law.
The Stockholm International Peace Research Institute (SIPRI) publishes outstanding research on emerging technology and international law. The explanation below comes from a publication discussing the problems with weapons testing and emerging technology.
As a general rule, international humanitarian law prohibits the use of weapons, means and methods of warfare that cause superfluous injury or unnecessary suffering, or damage military objectives and civilians or civilian objects without distinction. There are also a number of rules under treaty and customary law that ban specific types of weapons (e.g. biological and chemical weapons or blinding laser weapons) or restrict the way in which they are used, such as the 1907 Convention Relative to the Laying of Automatic Submarine Contact Mines. These restrictions and prohibitions are intended to set minimum standards of humanity during armed conflicts.
The problems with weapons testing are somewhat intuitive. It can be challenging to get weapons-testing data from countries: why would anyone turn over data so vital to national security? Emerging technology also moves fast, sometimes too fast for proper testing, and the testing standards themselves are not formulaic.
Testing artificial intelligence has its own challenges that I have outlined throughout this Substack. What are the standards? Do you measure output? Input data? What do you measure data for? Diversity? Accuracy? For LAWS, is accuracy determined by precision kills or the number of lives it will save for the deploying military?
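To make the measurement problem concrete, here is a minimal, purely illustrative sketch in Python. All numbers, names, and thresholds are invented and do not come from any real system or testing standard; the point is only that two common accuracy metrics can deliver very different verdicts on the same test results.

```python
# Purely illustrative: a toy evaluation of a hypothetical target classifier.
# All numbers are invented; no real system or testing standard is implied.

def precision(true_pos: int, false_pos: int) -> float:
    """Of everything the system engaged, what fraction was a lawful target?"""
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    """Of all lawful targets present, what fraction did the system find?"""
    return true_pos / (true_pos + false_neg)

# Hypothetical test results: 90 correct engagements, 5 misidentifications,
# 30 lawful targets missed.
tp, fp, fn = 90, 5, 30

print(f"precision: {precision(tp, fp):.2f}")  # 0.95 -- looks highly "accurate"
print(f"recall:    {recall(tp, fn):.2f}")     # 0.75 -- looks far weaker
```

Whichever number a testing regime privileges changes whether the same weapon is judged acceptable, which is exactly why unsettled standards matter.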
Deploy it
Assuming that an autonomous weapon is legal per se and fulfills Article 36, its use still has to meet IHL's legal requirements. Citing Articles 50, 51, and 57 of Additional Protocol I:
Under IHL, certain obligations must be met for a killing to be lawful. These obligations are cumulative and are:
Distinction between combatants and non-combatants.
Protection of civilians from indiscriminate attacks.
The necessity to take feasible precautions to avoid civilian losses.
The proportionality of an attack.
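As a purely illustrative sketch (real legal analysis is contextual and cannot actually be reduced to booleans), the following hypothetical Python snippet shows what "cumulative" means in practice: failing any one of the four obligations is enough to make an attack unlawful. The class and field names are mine, invented for illustration.

```python
# Purely illustrative: the four obligations above expressed as a cumulative
# check. Real IHL analysis is contextual and cannot be reduced to booleans.
from dataclasses import dataclass

@dataclass
class EngagementAssessment:
    distinguishes_combatants: bool       # 1. distinction
    avoids_indiscriminate_attack: bool   # 2. protection of civilians
    feasible_precautions_taken: bool     # 3. precautions
    proportionate: bool                  # 4. proportionality

def meets_cumulative_obligations(a: EngagementAssessment) -> bool:
    # Cumulative means conjunctive: failing any single obligation fails the test.
    return all([
        a.distinguishes_combatants,
        a.avoids_indiscriminate_attack,
        a.feasible_precautions_taken,
        a.proportionate,
    ])

# Hypothetical example: three obligations satisfied, proportionality not.
print(meets_cumulative_obligations(
    EngagementAssessment(True, True, True, False)
))  # False -- the attack would be unlawful
```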
Generally, there are two opinions on whether or not LAWS can accomplish these four requirements.1 The first is that the technology is not at the level needed to execute these obligations; therefore, using them would be illegal. The other opinion is that AI can accomplish these obligations under the correct circumstances. For example, in a battle where civilians are not present, the principle of distinction may be a non-issue. One argument is that if the technology is as good as humans are at accomplishing these obligations, then that should be sufficient to deem them legal.
These obligations should not be taken lightly. Distinction is a major part of the law of war and a massive technological hurdle to overcome before a "killer robot" could be used legally. Autonomous weapons could fail in a manner that indiscriminately attacks civilians at a scale other weapons systems could never reach. That kind of failure also raises questions about number 3, the necessity to take feasible precautions to avoid civilian losses.
Convict it?
The last regulatory regime is the international justice system. Should a war crime be committed in which an autonomous weapon plays a role, and should an arrest and trial follow, a judge could rule on the use of autonomous weapons. The major problem is that weapons are not legal persons, meaning you can't sue, arrest, fine, or jail an autonomous weapon that commits a war crime. The next installment of this series will walk through the different legal possibilities and the people who could be held responsible should an autonomous weapon commit a war crime.
1. There are plenty of nuances, gradations, and offshoots of the two camps I outlined. However, the macro-level debate on whether LAWS are legal generally falls somewhere between these two opinions.