Overall, AI has made a substantial difference in streamlining data analysis, reducing human error, and coping with electronic warfare, but it still requires human oversight, particularly for engagement decisions.
-Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare
I cut my legal teeth, so to speak, on Lethal Autonomous Weapon Systems (LAWS) and International Humanitarian Law (IHL), a.k.a. the Law of War. The thesis I wrote on the subject nearly three years ago, and rehashed on this Substack, generally holds up, although there are a few things I’d like to change and update.
I think a few developments in the general AI/warfare space are worth writing about again. I believe these developments, coupled with the fracturing of the post-WWII order, mean that international legal enforcement will return to its roots, namely, reprisals.
Before we get going, I want to give my obligatory reminder that drone warfare ≠ LAWS, although they do go hand in hand now. I also again want to point out that AI and war go beyond the “killer robots” most of us think of. Cognitive warfare, influencing the way people think and make decisions, is being conducted primarily by the PRC, almost exclusively through social media platforms and algorithms.1 AGI also promises to solve plenty of war-related problems but doesn’t necessarily mean there will be terminators walking around the battlefield.
Ukraine developments
A Forward Horizon Group team came back from Ukraine a few weeks ago, and one of the takeaways when it comes to autonomous warfare is that the Ukrainians are doing a lot of extremely innovative things with drones; some of it is autonomous, but most of it is not. There are no “killer robots.” This takeaway is largely backed up by a Center for Strategic and International Studies report published earlier this month. If you are at all interested in this topic or want an up-to-date readout, the whole thing is a must-read, but here are some of the highlights of the executive summary:2
Upshot
The Ukrainian military’s objective is to remove warfighters from direct combat and replace them with autonomous unmanned systems.
Autonomy—defined by the U.S. military as a system’s ability to accomplish goals independently or with minimal supervision in complex and unpredictable environments—is not yet present on the battlefield in the war in Ukraine.
The adoption of drones equipped with AI-enabled autonomous navigation capabilities is driving a marked decrease in overall strike costs by minimizing both drone losses and repeated mission attempts.
Human oversight remains pivotal—particularly for engagement decisions—reflecting a human-in-the-loop approach that could shift toward higher-level supervision in the future while still maintaining human control of the system.
Localizing the [open source] weapon
The Ukrainian defense industry is pursuing an approach of training small AI models on small datasets rather than developing large, all-encompassing models.
Ukrainian engineers are increasingly leveraging open-source technologies and existing computer vision models to accelerate research and development while keeping costs low.
Training to operate unmanned systems equipped with autonomous features can now be completed in as little as 30 minutes to one day, substantially broadening access to these weapons systems.
Everything but the kill
Delegating target recognition to AI-enabled automatic target recognition (ATR) systems onboard unmanned platforms reduces human limitations and allows locking on to targets up to 2 km away.
Autonomous navigation makes drone strikes three to four times more likely to succeed.
Integrating into the civilian sensor mesh network
Ukrainian military authorities increasingly require all unmanned and reconnaissance systems to integrate with situational awareness and fire-correction platforms, aiming to establish a common operating picture in real time.
Two major challenges lie ahead for AI-enabled autonomy: extending these capabilities to ground, sea, and undersea platforms and enabling swarming for aerial systems.
You can look at this list from Mick Ryan’s Substack for additional reports on AI in Ukraine during the war.
Outside of Ukraine
There isn’t much hard evidence to add about what is happening outside of Ukraine. Israel is fighting a much different war in Gaza, but the same general uses and limitations seen in Ukraine are present there. The IDF has primarily used AI for target selection and identification, drawing on existing intelligence profiles of tens of thousands of Palestinians.
Area denial (using munitions to deny the enemy access to territory) is being discussed more and more, in one context as a way to use AI to make smarter land mines that could spare civilian lives. Area denial, certainly driven by AI, will be a major part of the next naval war. It should go without saying by now that AI is as central to the PRC’s national security strategy as it is to any other country’s.
Reshaping the Law of War
The general trends outlined above indicate three very general technical takeaways:
Autonomous weapons (whatever their level of autonomy) are cheaper and easier for anyone (state and non-state actors alike) to use in combat.
When used outside a pitched battle like Ukraine, these systems will rely on existing civilian-based sensory networks. (Think of how useful all the Ring cameras in your neighborhood will be to an infantry unit clearing the block).
Machine-on-machine war is preferable to all sides, but the ultimate goal is still to fight through your enemy’s machines and kill its human soldiers, the only true way to defeat a country militarily.
And four general takeaways in the world of International Law:
It is reasonable to question to what extent the Law of War exists at all right now, especially if there are no post-war repercussions for blatant war crimes.
Humanitarian-centric rules that carry high strategic costs will be ignored when a state’s survival is at stake. The withdrawals from the Ottawa Anti-Personnel Mine Convention are evidence of this.
The traditional line between a privileged combatant and a nonprivileged combatant will deteriorate. A soldier in uniform is a privileged combatant subject to certain protections. A spy or soldier dressed in the enemy’s uniform to conduct operations is not.
State actors that cannot reasonably count on post-war prosecutions must rely on other mechanisms to enforce generally accepted Laws of War.
Reprisals
A reprisal is a breach of IHL to bring the other side back into compliance with the law. There is a long history of reprisals, although they have generally fallen out of favor since WWII. A belligerent can’t just break any law either; executing POWs or civilians, for example, is prohibited. Interestingly, a few militarily significant states entered reservations on the restrictive part of these laws.
The best, although not great, current example would be the energy infrastructure attacks in the Ukraine war. Both sides claim that by attacking civilian energy networks with equal and proportional strikes, they are trying to bring the other side back into compliance with the law. This isn’t a great example because Russia’s pretty clear lie about which side was in breach first is intertwined with these attacks, and it is debatable whether either side would actually stop attacking short of a cease-fire. Nevertheless, it illustrates the point well enough—Ukraine, with increasingly limited post-war recourse to prosecute the aggressors for war crimes, must use alternative measures, effective or not, to force them to comply with the Law of War.
It is also possible to view the assassinations of Iranian scientists by Israel as in the spirit of reprisals. These assassinations serve two purposes. One is that they are militarily and strategically significant actions to prevent the enemy from obtaining useful technology, and they are arguably allowable under IHL.3 The other (the reprisal argument) is that Israel is using directed attacks to punish attempts at nuclear proliferation.
A non-kinetic analogue is hostage diplomacy: using arrests and criminal indictments to bring an opposing side into compliance with existing treaties and trade practices, or in retaliation for cyber attacks that fall outside traditional espionage norms.
Going out on a limb and laying out a possible future scenario: I don’t think it is implausible that countries could engage in reprisals over the killing of any humans in combat. In machine-on-machine warfare during an age of declining birthrates, one could imagine a world where killing humans is a war crime. For a long time, officers and nobility were subject to separate rules of engagement, so as outlandish as this proposal might seem, it is not without historical precedent.
Lastly, drones sit in an undefined gray area in terms of lawful combatancy and, therefore, whether they are subject to the Laws of War. If drones are defined as planes rather than missiles, they must bear distinguishable markings, something most do not. Using commercial-off-the-shelf (COTS) equipment in war could, by definition, be perfidy; and if everything is perfidy, then nothing is perfidy. States could use reprisals to force drones to carry distinguishing markings, or they could use perfidy as an excuse to throw the whole book out the window and execute all drone operators as spies.
Not much of a high note
What happens when an autonomous weapon does something we don’t want it to do? If reprisals are the norm and we lose control of autonomous weapons, the enemy will not necessarily understand or believe us. We could see hostages executed for actions we didn’t sanction in the first place.
The bright side, although admittedly not much of one, is that democracies generally follow the laws of war. Militaries also, out of concern for their own, are typically restrained from breaking norms when their troops may be subject to the same treatment. It is easy to view the sanctioned torture that went on in the early 2000s as a counterexample, but that was an extremely controversial tactic that many in the military disliked out of fear of reprisals. It also never became widespread or the norm.
Given the declining state of the international order, it’s hard to see more treaties on the law of war signed, implemented, and adhered to in the short term. It is also hard to imagine that there will be any arrests or prosecutions for the war crimes committed over the past few years.
This is why it is crucial to keep an eye on the future and on how a more assertive international legal regime can be implemented after this era of global disorder is over. Working through some of these problems now is important because the immediate future of International Humanitarian Law is going to be shaped during wartime and not, in any practical way, by treaty. Instead, it will be shaped by what we are willing to have done to our service members and what we can enforce with reprisals. Post-war, in the event of a democratic victory, we will have only our actions and customs to draw on when we reshape what we want the world to look like.
I intend to write on this in the near future, especially as it pertains to free speech. If anyone has any recommended reading or wants to collaborate on this, please reach out to me.
I reordered, chopped, and omitted some takeaways to flow better with this piece.
I am taking considerable leeway here as to whether this is an “international armed conflict” and assuming these scientists were engaged in militarily significant research. These are debatable positions, but they are pretty legalistic and wonky, and they play to my general point: reprisals, legal or not, are being used.