
IP24054 | Military AI Governance: Moving Beyond Autonomous Weapon Systems

The opinions of the authors are their own and do not represent the official position of the Institute of Defence and Strategic Studies, S. Rajaratnam School of International Studies, NTU. These comments may be reproduced with the prior permission of RSIS and with acknowledgment of the authors and RSIS. Please email the Editor of IDSS Paper at [email protected].


AI governance in the military has mainly focused on autonomous weapon systems, while AI-based decision support systems have received less attention. Given that such systems will likely be used more widely in AI-enabled warfare, discussions of military AI governance need to broaden their focus beyond autonomous weapon systems.

COMMENT

At several major global artificial intelligence summits in recent months, discussions regarding military AI governance have tended to focus on autonomous weapon systems (AWS). AWS, commonly known as “killer robots,” have received the most attention thanks to an effective campaign by Human Rights Watch (HRW) to “stop the killer robots.” The evocative image of “killer robots,” which once mobilized discussions on lethal autonomous weapons systems (LAWS) at the United Nations, now distorts and narrows the debate on the military applications of AI.

Contrary to media portrayals, the use of AI in the military extends well beyond AWS. For example, Israel reportedly used an AI-based decision support system (ADSS) – the Lavender system – in the Gaza Strip. Observers of these military AI applications have generally failed to recognize the distinction between ADSS and AWS, thus treating the Lavender system as an AWS. However, the Lavender system does not autonomously select and apply force against targets; it only assists in identifying them.

Unlike AWS, which are weapon systems that, once activated, can identify, select, and engage targets without further intervention by a human operator, ADSS do not replace human decision-makers; target selection and engagement decisions are still made by humans. Nevertheless, military applications of ADSS in Gaza and Ukraine raise doubts about compliance with international humanitarian law (IHL) and the ability to minimize risks to civilians. Given these doubts, policymakers should broaden current debates on military AI to encompass ADSS, raising awareness, understanding, and behavioral norms regarding their military application, particularly in decisions on the use of force.

Unlike autonomous weapon systems (AWS), AI-based decision support systems (ADSS) do not replace human decision-makers. ADSS have reportedly been used on the battlefields of Gaza and Ukraine, from identifying targets for military operations to recommending the most effective targeting options. Image from Pixabay.

The campaign to stop killer robots

The concept of AWS was popularized by HRW in its 2012 report “Losing Humanity: The Case Against Killer Robots.” The term “killer robots” has since been used to draw media attention to serious ethical and legal concerns regarding AWS. In 2013, HRW launched the Campaign to Stop Killer Robots, which successfully mobilized the international community, and the first informal meeting of experts on LAWS was held at the United Nations in 2014. Since then, AWS have been associated with, even equated to, military AI, even though AWS may or may not integrate AI. Persistently framing issues such as the military application of ADSS in terms of AWS, however, distorts the debate about the risks and challenges posed by the military use of ADSS in decisions on the use of force.

ADSS and military decision-making on the use of force

In the military context, ADSS can help decision-makers by collecting, combining, and analyzing relevant data sources, such as drone surveillance images and telephone metadata, to identify people or objects, evaluate behavior patterns, and make recommendations for military operations. In decisions on the use of force, ADSS can inform decision-makers about who or what a target is and when, where, and how to strike it.

For example, the Lavender system reportedly used AI to support the Israel Defense Forces (IDF) in its target selection process. Information on known Hamas and Palestinian Islamic Jihad (PIJ) operatives was used to train the system to identify characteristics associated with these operatives. The system then combined intelligence data, such as intercepted chat messages and social media data, to assess the likelihood that an individual was a member of Hamas or PIJ. The Israeli military also reportedly used another ADSS – Gospel – to identify buildings and structures used by militants.

Besides target selection, ADSS can also assist the military in the target engagement process. In the Ukraine-Russia conflict, ADSS were used to analyze large volumes of intelligence information, as well as radar and thermal images. These systems then identified potential enemy positions and recommended the most effective targeting options.

ADSS and AWS – Conceptual and legal differences

ADSS represent a more diverse category of military AI applications than AWS, although some of the technologies used in the two may be similar. For example, ADSS with facial recognition and tracking software could form part of an AWS; but if a weapon system can select and engage a target without human intervention, it would be classified as an AWS.

The main concern with AWS is that the system itself carries out the entire target selection and engagement process. Simply put, humans do not choose (or know) the specific target, the precise time or location of the attack, or even the means and methods of attack. If an unlawful attack is committed by an AWS, the question arises as to who is responsible for that conduct. As reflected in both the Rome Statute and the 2019 Guiding Principles from the UN LAWS discussions, individual criminal responsibility applies only to humans, not machines. The challenge, however, lies in identifying the person(s) responsible, who may include the manufacturer, programmer, military commander, or even the AWS operator. Reliance on AWS therefore creates what is known as an “accountability gap,” in which conduct potentially amounting to a violation of IHL cannot be satisfactorily attributed to an individual; thus, no one is held responsible.

On the other hand, ADSS are intended to support human decision-making; they do not replace human decision-makers. Humans theoretically remain “in the loop” when making decisions to select and apply force against targets. Therefore, with respect to ADSS, the accountability gap problem – a thorny issue in the UN LAWS discussions – may not arise to the same extent as with AWS, since ADSS are designed to retain human decision-making.

However, ADSS raise the question of what quality and level of human-machine interaction is required to ensure that their use complies with IHL obligations, including those arising from the principles of distinction, proportionality, and precaution. The Lavender system has been criticized for contributing to a high number of civilian casualties, as its human operators allegedly served only as a “buffer.” This example shows how decision-makers could end up relying on conclusions drawn by a machine, rendering the human in the loop redundant.

Others argue that military applications of ADSS for the use of force can facilitate compliance with IHL. For example, ADSS can help human decision-makers determine the most appropriate means of attack by taking into account data about the target and environment, as well as assessing potential collateral damage.

The way forward for Singapore

Singapore is at the forefront of efforts related to military AI governance. It has actively participated in various discussions on military AI governance, including the UN LAWS discussions and the 2023 Responsible AI in the Military Domain (REAIM) Summit. In February 2024, Singapore hosted the first REAIM regional consultations for Asia in partnership with the Netherlands and the Republic of Korea. In 2023, Singapore not only endorsed the REAIM Call to Action and the US-led “Political Declaration on the Responsible Military Use of AI and Autonomy”, but also joined the Convention on Certain Conventional Weapons, under which the UN LAWS discussions take place.

Singapore can use its unique role as a “trusted and important interlocutor” on various AI governance platforms, such as REAIM, to broaden discussions to include ADSS. Unlike AWS, for which various multilateral platforms exist to facilitate discussions and build consensus, ADSS have not received the necessary level of attention. With Singapore's influence on these AI governance platforms, more attention and awareness could be generated among relevant stakeholders.

Second, policymakers should develop the necessary understanding of ADSS and their associated risks and challenges under IHL. They could do this through IHL training programs and multi-stakeholder discussions involving technology companies and academics, which would help them better understand the measures that may be necessary in the design and use of ADSS to ensure respect for IHL. By undertaking such capacity building, Singapore could amplify its voice and leverage its influence in international forums to lead efforts to raise awareness, understanding, and standards of behavior regarding the military application of ADSS, particularly in decisions on the use of force.

Mei Ching LIU is a research associate in the Military Transformations Program at the S. Rajaratnam School of International Studies.
