Moral Questions in a War Sans Humanity

By Samuel Hagood, University of Chicago

Today, many machines complete tasks that used to require human intelligence. These machines, known as artificial intelligence or AI, streamline our lives by finding the quickest route through traffic and the best results for our searches. AI can recognize our words and our faces. ChatGPT, an AI chatbot, draws on the collective knowledge of the Internet to present expertise in seconds. Yet artificial intelligence can do as much harm as it can good. This becomes apparent at the crossroads of AI and military innovation. A new class of weapons, known as fully autonomous weapons systems, uses artificial intelligence to independently select and engage targets without human control or supervision.1 These weapons are attractive to modern militaries because they aren’t restrained by human limits like sleep, they cost less to operate than manned systems, and they keep troops off dangerous battlefields. Yet there are moral questions that must be considered before inhuman, robotic agents begin to take the lives of humans. 

First, many general codes of conduct acknowledge that people should take responsibility for their mistakes and the lives they end. They must respect their opponents enough to accept that fact; otherwise, enemies become animals to be exterminated. Autonomous weapons present sobering questions when considered in light of this duty. Who is put on trial when an autonomous weapon commits a war crime? It seems reasonable to first blame the weapon’s programmers for an accident. Yet if they created the weapon to be fully autonomous, then it was designed to learn and make its own decisions. It makes little sense to criminalize innovators for delivering on a customer’s request when militaries should know that autonomous weapons are fully capable of making mistakes. Blaming commanding officers for accidental deaths and crimes also fails to address the issue. The crimes of a fully autonomous agent are not the crimes of the general, any more than the crimes of the private are the crimes of the general; both subordinates can make mistakes. The weapon committed the crime, and fighting a just war means punishing those who fight unjustly. We can’t punish autonomous weapons the same way we punish humans, since machines don’t understand what it means to be punished. Since they cannot suffer as we do, no punishment will satisfy a victim’s loved ones or our sense of justice.2 Autonomous weapons therefore create a black hole of moral responsibility that, if left unchecked, will leave deaths unexplained and families crying for forgotten sons and daughters. 

Paul Scharre, a retired Army Ranger turned researcher in the field of autonomous weapons, lived through many nerve-wracking moments during his time in service. In a 2018 speech, Scharre recalled a time when his squad settled onto a mountaintop near the Afghanistan-Pakistan border to watch for Taliban border crossings. A Taliban group sent a young girl from a nearby village up the mountain to scout Scharre’s position and report back. Scharre watched through his sniper rifle’s scope as the girl approached and eventually turned away; the soldiers simply waited out her search. What never occurred to them was to kill the child who, according to the precepts of just war, was a combatant gathering intelligence. She was an enemy scout. At best, a robot in Scharre’s place would have had some algorithm ensuring that it followed the established standards of war, and under those standards she was a lawful target. Whether or not it was built to uphold justice, a small girl, forced up a mountainside by terrorists, would have died that day.3 

Consider, then, the rise of autonomous weapons in a world increasingly divided over right and wrong. These weapons enter the arsenals of the world as war rages between Russia and Ukraine, as an Israeli invasion cripples Gaza, and as China looms over Taiwan and the Western Pacific at large. Although even unsubstantiated reports of the use of fully autonomous weapons remain rare, many advanced armies claim the technological capability. Most commonly, certain missiles and drones possess multiple modes of operation, with increasing levels of autonomy available in communications-denied environments or in situations requiring superhuman reaction times. 

Take, for example, the Kargu-2 drone, manufactured by the Turkish defense company STM. As loitering munitions equipped with facial recognition software and capable of attacking targets without data connectivity to human controllers, Kargu-2s are truly fully autonomous weapons systems. According to a 2021 report addressed to the UN Security Council, in 2020 the Government of National Accord in Libya utilized Kargu-2 drones to pursue soldiers loyal to the warlord General Khalifa Haftar away from the capital of Tripoli. The UN report states that the “retreating HAF were subsequently hunted down and remotely engaged by…the lethal autonomous weapons systems,” which utilized “a true ‘fire, forget and find’ capability.”4 This report strongly implies that a globally significant event took place in Libya: for the first time, humans lay dead because of the decisions of a machine. Unfortunately, the Kargu-2 is far from unique. The Israeli defense company Israel Aerospace Industries (IAI) produces a land vehicle known as the REX MkII, which boasts two machine guns and is typically controlled by a human operator on a tablet. However, the deputy head of IAI’s autonomous systems division, Rani Avni, stated that “it is possible to make the weapon itself also autonomous,” in addition to the autonomous navigation and surveillance systems already present in the four-wheel-drive REX.5 The United Kingdom has requested the vehicle for experimental programs,6 and state-owned IAI retains an interest in using the REX to patrol potentially dangerous areas, such as the Gaza Strip. Kalashnikov, manufacturer of the notorious AK-47, now produces the Lancet loitering munition, a weapon that Kalashnikov claims is capable of “autonomously finding and hitting a target.”7 The U.S., through a plethora of contracts with top defense firms, also boasts many prototypes and autonomy-capable weapons systems and platforms, including the Long Range Anti-Ship Missile (LRASM) and the X-47B UCAS. China, Canada, and various European states all pursue similar technologies through partnerships with defense technology firms.8 

Given the potentially disastrous advent of fully autonomous weapons, prudent counsel might advise these countries to come together and negotiate reasonable limits on the development of these weapons, though the effort will likely resemble the continual struggle that is nuclear disarmament. The UN’s Convention on Certain Conventional Weapons (CCW) restricts the use of certain weapons considered indiscriminately harmful to non-combatants (blinding lasers, incendiary weapons, anti-personnel mines, etc.). A 2013 meeting created what is now a Group of Governmental Experts that convenes yearly to discuss possible additions to the Convention concerning autonomous weapons; however, the CCW is a consensus-based body, and the U.S. and China, among others, have been reluctant to commit to any changes.9 

A certain strategic ambiguity has been a mainstay of the discussions taking place over the last decade, as U.S.–China power competition and regional tensions in the Middle East and Eastern Europe have left financially capable countries reluctant to leave a potentially revolutionary technology on the table. Israel, the UK, Russia, and South Korea join the United States and China in investing considerable resources to research and develop autonomous weapons for future conflict, while simultaneously maintaining vague stances on which systems go too far.10 The Department of Defense’s current policy on autonomous weapons states, “Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”11 Rational as that policy may sound, the wording allows the U.S. military-industrial complex to innovate ceaselessly, inching ever closer to the blurry line delineated by the phrase “appropriate levels of human judgment.” China adopted a unique diplomatic approach to the issue in April 2018, when its delegation to the CCW announced a “desire to negotiate and conclude” a new protocol banning the use of fully autonomous weapons. However, the Chinese stressed that the ban was limited to use only, not development, and the same day saw China’s air force release details on upcoming exercises to evaluate the use of intelligent-swarm drones in combat.12 

Since discussions began under the CCW in 2013, 40 countries have joined human rights groups and other NGOs in voicing support for a ban on autonomous weapons systems.13 Their proclamations can be seen as sincere counsel from members of a concerned world; they can also be interpreted as calculated geopolitical ploys to reduce the disparity in military innovation between regional powers and their lesser competitors. The truth lies somewhere between idealism and realism. Each country may or may not recognize the threat that autonomous weapons pose to the entire world, but all are beginning to recognize the overwhelming security concern that a hostile neighbor armed with these weapons would pose to them individually. 

The possibility that autonomous weapons will be used in emerging conflicts depends on three factors: the ability of countries to acquire or develop these weapons; the perceived necessity of using autonomous weapons in place of more conventional arms; and any reservations that countries may have about the longer-term consequences of utilizing artificial intelligence in warfare. 

First, with the private sector producing the majority of these weapons (China’s state-owned defense firms being an exception to the rule), autonomous weapons are increasingly accessible, not only to those who can afford them but to those aligned with the blocs that produce them. While nations like the U.S. and Israel benefit extensively from established, innovative defense industries within their borders, those industries don’t define themselves by borders. An American aid package to Ukraine provided the war-torn country with as many as 580 Phoenix Ghosts,14 unmanned loitering munitions sourced from the California defense firm Aevex Aerospace.15 In 2020, Azerbaijani officials confirmed the use of Israeli-made Harop drones in that year’s conflict with Armenia;16 Harops are autonomous weapons produced by the state-owned Israel Aerospace Industries.17 Export controls on autonomous weapons will be a major factor in countries’ attempts to procure them, emphasizing the importance of diplomatic relations and shared goals between haves and have-nots. Interest in autonomous weapons could create new alignments between nations, but it is far more likely to reinforce existing ones. A country’s willingness to deploy autonomous weapons in battle, therefore, may be affected not only by its own stance on the issue but by the views of the countries supplying it with the weapons in the first place. 

Second, autonomous weapons are more likely to be acquired, developed, or used where nations perceive a need for the weapons’ unique strengths. For instance, almost any combatant can benefit from making informed decisions more rapidly than his opponent, and autonomous weapons typically operate at superhuman speeds. However, errors in their target-evaluation and decision-making processes could produce mistakes that unfold too quickly for humans to intervene. Certain autonomous weapons (cheaper drones with makeshift explosives attached to them, for example) can also replace humans outright or keep soldiers out of harm’s way. The prospect of autonomous weapons will therefore create a natural tension between governments’ desire to limit indiscriminate harm to civilians and the care they take to protect their own soldiers. 

Finally, to the degree that world governments recognize the consequences of autonomous-weapon proliferation, their development and use of the weapons will depend upon the security situation in which they find themselves. A country on the verge of destruction or, more likely, a government on the verge of expulsion or collapse, may see autonomous weapons as a measure of last resort, no matter how ill-advised their use would be. This scenario illustrates a far more fundamental fear that worries many experts in the field.18 If a government of any quality and condition were to hand control of more expansive or influential systems over to artificial intelligence, the ramifications of glitches, biased training data, or unforeseen conditions could be catastrophic. Autocratic regimes, because they tend to distrust their populations, will be especially prone to delegating crucial functions to machines that are more trustworthy in allegiance, if not in action.19 Monitoring contested waters, identifying rebels on crowded streets, automating nuclear responses in the event of a first strike: the world of national security is rife with potential applications of AI, and the physical world remains rife with ambiguous data that could confuse a weapon trusted to act alone. Individual nations’ use of autonomous weapons will turn, among other considerations, on the varying levels of trust that those nations place in certain systems, be they human or artificial. 

As ever, the world has a great deal of reckoning to do with the implications of its own creation. Artificial intelligence, generally speaking, may well revolutionize every aspect of our lives for the better; likewise, autonomy in warfare may allow nations to protect their citizens more effectively than ever before. However, countries like the United States, Israel, and China should do their due diligence to understand the plight in which their innovation places other nations and their citizens. In a world where the American army lets robots fight its battles, American soldiers may never see the lives that they take, while the human beings contending with the United States will go through a calculated, mechanized hell in pursuit of victory and, in all likelihood, defeat. The threat of war with the United States may become more intimidating through the use of AI in warfare, but the beginning of that war will introduce slaughter without accountability and sorrow without remorse. 

The United States, then, should work to negotiate practical international agreements on autonomous weapons while maintaining a position of strength in the arms race to develop them. International discussions focused on issues intrinsic to the weapons may be fruitful, especially if emphasis is placed on the instability they will bring to standoffs and contested environments. However, the laborious, oft-thwarted efforts of nuclear non-proliferation will likely be the blueprint for those who preach caution in an increasingly dangerous world. Hope should remain for a future where humans do not fall prey to machines, but realism should dictate the methods by which well-meaning actors achieve that future for posterity. 

Bibliography: 

[1] Sayler, Kelley M. “Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems.” Congressional Research Service (CRS) Reports and Issue Briefs, Congressional Research Service, 2021. Gale Academic OneFile, link.gale.com/apps/doc/A688554328/AONE?u=24897062&sid=bookmark-AONE&xid=2e6d0fc8. Accessed 28 Sept. 2022. 

[2] Sparrow, Robert. “Killer Robots.” Journal of Applied Philosophy, vol. 24, no. 1, 2007, pp. 62–77. JSTOR, http://www.jstor.org/stable/24355087. Accessed 9 Feb. 2023. 

[3] “Clip of the Month: Ethical Implications of Autonomous Weapons, with Paul Scharre.” YouTube, uploaded by Carnegie Council for Ethics in International Affairs, 29 May 2018, https://www.youtube.com/watch?v=spOSyIjVNyk. 

[4] Choudhury, Lipika Majumdar Roy. “Final Report of the Panel of Experts on Libya Established Pursuant to Security Council Resolution 1973 (2011).” UN Official Document System, Official Document System of the United Nations, 8 Mar. 2021. Accessed 4 Jan. 2024. 

[5] Bernstein, Alon, and Jack Jeffery. “Israeli Firm Unveils Armed Robot to Patrol Volatile Borders.” AP News, AP News, 13 Sept. 2021, apnews.com/article/technology-middle-east-business-israel-628f878f704b7c082ec2ebc9e9441173.  

[6] Judson, Jen. “IAI Debuts New Hybrid Ground Robot Joining the UK Army Inventory.” Defense News, Defense News, 19 Aug. 2022, www.defensenews.com/digital-show-dailies/dsei/2021/09/13/iai-debuts-new-hybrid-ground-robot-joining-the-uk-army-inventory/.  

[7] “Kalashnikov Presented Precision UAV Weapon System - Zala Lancet.” Kalashnikov, Kalashnikov Concern, 24 June 2019, en.kalashnikovgroup.ru/media/bespilotnye-letatelnye-apparaty/kalashnikov-predstavil-vysokotochnyy-udarnyy-bespilotnyy-kompleks-zala-lantset.  

[8] Law, Marcus. “Top 10 Military Technology Companies Putting AI into Action.” Technology Magazine, BizClik Media Ltd., 7 Mar. 2023, technologymagazine.com/top10/top-10-military-technology-companies-putting-AI-into-action#.  

[9] “GGE on Lethal Autonomous Weapons Systems.” Digital Watch Observatory, Geneva Internet Platform, 2 Nov. 2023, dig.watch/processes/gge-laws.  

[10] Wareham, Mary. “Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control.” Human Rights Watch, Human Rights Watch, 28 Mar. 2023, www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.   

[11] United States, Department of Defense, Office of the Under Secretary of Defense for Policy. Autonomy in Weapon Systems: DOD Directive 3000.09, United States Department of Defense, 2023.  

[12] Kania, Elsa B. “China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems.” Center for a New American Security, Center for A New American Security, 18 Apr. 2018, www.cnas.org/publications/commentary/chinas-strategic-ambiguity-and-shifting-approach-to-lethal-autonomous-weapons-systems-1.  

[13] “Killer Robots: Military Powers Stymie Ban.” Human Rights Watch, Human Rights Watch, 20 Dec. 2021, www.hrw.org/news/2021/12/19/killer-robots-military-powers-stymie-ban.  

[14] Lopez, Todd. “More Himars, Phoenix Ghost Drones Bound for Ukraine.” U.S. Department of Defense, U.S. Department of Defense, 2 July 2022, www.defense.gov/News/News-Stories/Article/article/3103655/more-himars-phoenix-ghost-drones-bound-for-ukraine/.  

[15] Insinna, Valerie. “Meet ‘Phoenix Ghost,’ the US Air Force’s New Drone Perfect for Ukraine’s War with Russia.” Breaking Defense, Breaking Media, 21 Apr. 2022, breakingdefense.com/2022/04/meet-phoenix-ghost-the-us-air-forces-new-drone-designed-for-ukraines-war-with-russia/.  

[16] “Azerbaijan Praises ‘Very Effective’ Israeli Drones in Fighting with Armenia.” The Times of Israel, The Times of Israel, 30 Sept. 2020, www.timesofisrael.com/liveblog_entry/azerbaijan-praises-very-effective-israeli-drones-in-fighting-with-armenia/. 

[17] “HAROP Loitering Munition System.” IAI, Israel Aerospace Industries, 2005, www.iai.co.il/p/harop. 

[18] Michel, Arthur Holland. “Inside the Messy Ethics of Making War with Machines.” MIT Technology Review, MIT Technology Review, 18 Oct. 2023, www.technologyreview.com/2023/08/16/1077386/war-machines/. 

[19] Farrell, Henry, et al. “Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous.” Foreign Affairs, Council on Foreign Relations, 10 Jan. 2023, www.foreignaffairs.com/world/spirals-delusion-artificial-intelligence-decision-making. 