The Myth of “Targeted Killing”. On the responsibility of AI-powered targeting systems in the case of Lavender

Talk by Christian Heck and Rainer Rehak at the conference AI and warfare – Investigating the technological and political domains of current conflicts, 16–18 October 2024, at the Alexander von Humboldt Institut für Internet und Gesellschaft (HIIG), Berlin

For registration, click the following link: https://www.hiig.de/events/ai-warfare/

Conference program: https://www.hiig.de/wp-content/uploads/2024/08/EXTERN-Conference-Programme-AI-Warfare.pdf

Abstract

The topic of AI in military technologies and the relationship between humans and machines has been discussed in philosophy, social science and critical algorithm studies for decades. In recent years, however, weapon systems with AI components have actually been developed and used in conflicts around the world.1 The deployment of such AI-based systems in real combat situations allows formerly mainly theoretical concepts to be revisited, as well as historical analyses of the technological developments leading up to the concrete systems in question. In this work, our research inquiry deals with the AI-powered target selection system Lavender, which has been used by the Israeli military in the currently ongoing war in Gaza. We focus on questions of responsibility and historical continuities. The empirical basis for our research is a piece of investigative journalism entitled “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza” by Yuval Abraham, published in a cooperation of the well-known Israel-based outlets “+972” and “Local Call”2. In it, Abraham exposed an AI-based command, control and decision support system used by the IDF to identify militants of Hamas and Islamic Jihad and to create a kill list. Due to the heuristic nature typical of AI-generated results, around 10% of the suggested targets were civilians, which was deemed acceptable by IDF decision makers. According to witnesses, the homes of the targets were then attacked at night, when the whole family is usually present. In June 2024, the international journalists’ network Forbidden Stories revealed in its investigation “The Gaza Project” that journalists and media professionals had also been placed on such kill lists3.

This targeting of civilians, or, for example, classifying them as “harmless neighbors”4, is part of the historicity of AI-based targeted killing, as can be read from the early “signature strikes”5 in Waziristan (Pakistan) and Afghanistan. It is also inscribed in almost all hegemonic data processing6 methods of this century: from big data to social media to ad-targeting systems, whose logic is also used to kill people in war7.

From citizens rendered transparent through AI-based surveillance of everyday life, the path leads from the findings of the global war on terror into transparent battlefields that not only take over civilian infrastructures and public space, but also threaten the status of the “innocent civilian” in continuous disregard of the Geneva Conventions8.

But using people as a shield does not take away their civilian character. Using hospitals and schools as military bases does not give a military the right to strip them of their protection under international law. This protective character, however, is not part of the targeting system Lavender. Nevertheless, these redefinitions, including the acceptance of civilian casualties in the three-digit range, are inscribed in decision support systems for Multi-Domain Operations9, in which soldiers and commanders frequently see through vision machines and recognize by means of complex information processing systems. Relevant questions to inquire are: How should the problem be approached if the protection of the civilian population can no longer be guaranteed due to the falling inhibition threshold when such systems are used? If a clear separation of human and machine agency and decision-making can no longer be established? If operational errors in identification software identify schools as military bases or civilians as Hamas militants? And how can these internal functions and behaviors of AI implementations be explained to decision makers, to the world public or to the UN Security Council?

In our work, we suggest and argue for using the imaginary of a “dirty bomb” for AI-based targeting systems. This concept captures their fuzzy heuristic characteristics and allows concrete questions of responsibility to be asked within the military hierarchy, beyond only looking at the individuals on the battlefield. The questions above, and possible approaches to extracting the partial responsibilities that are inscribed in, or emerge from, technical, multidimensional spaces and infrastructures, are the focus of our work. The insights presented here are based on a joint analysis by experts of the Forum Computer Professionals for Peace and Societal Responsibility (FIfF e. V.), the Information Center Militarization (IMI e. V.) and the Working Group Against Armed Drones, which called for the practice of “targeted killing” with AI-supported systems to be outlawed as a war crime10. Our work aims to advance the academic and public debate on the relationship between humans and machines in the context of AI-powered weaponry in armed conflicts.

1Jutta Weber, “Autonomous Drone Swarms and the Contested Imaginaries of Artificial Intelligence,” Digital War 5, no. 1 (January 1, 2024): 146–49, https://doi.org/10.1057/s42984-023-00076-7.

2Yuval Abraham, “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza,” +972 Magazine, April 3, 2024, https://www.972mag.com/lavender-ai-israeli-army-gaza/.

3“Gaza Project,” Forbidden Stories, accessed July 5, 2024, https://forbiddenstories.org/projects_posts/gaza-project/.

4Jobst Paul, “Krieg und Ethik – Philosophie als Waffe im Gazakrieg” [War and Ethics – Philosophy as a Weapon in the Gaza War], unpublished manuscript, May 2024.

5Signature strikes are killings of suspected militants whose identity is not fully known. These killings are based on pattern-of-life analysis, i.e. on findings about the behavior of individuals that indicate that they are militants. This procedure was first authorized by then US President George W. Bush for Pakistan in 2008 and was subsequently also permitted in Afghanistan, Yemen and Somalia. The Pentagon openly admitted that signature strikes were often used to kill unknown individuals simply because of their “suspicious behavior”. Cf. Cora Currier, “The Kill Chain: The Lethal Bureaucracy behind Obama’s Drone War,” The Intercept, October 15, 2015, https://theintercept.com/drone-papers/the-kill-chain/.

6Following Gramsci, hegemony refers to a type of rule that is essentially based on the ability to define and assert one’s own interests as the general interests of society.

7Meredith Whittaker, “The Prizewinner’s Speech,” May 15, 2024, https://www.helmut-schmidt.de/en/news-1/detail/the-prizewinners-speech.

8“Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June 1977,” accessed July 5, 2024, https://ihl-databases.icrc.org/en/ihl-treaties/api-1977/article-50.

9The initial concept for Multi-Domain Operations, as defined by the NATO International Military Staff in 2022, is: “Orchestration of military activities, across all domains and environments, synchronized with non-military activities, to enable the Alliance to create converging effects at the speed of relevance.”

10“Senkung der Hemmschwelle durch den Einsatz von Künstlicher Intelligenz – Lavender und Co. sind als Kriegsverbrechen einzustufen” [Lowering the Inhibition Threshold through the Use of Artificial Intelligence – Lavender and Co. Must Be Classified as War Crimes], FIfF e.V., April 29, 2024, https://blog.fiff.de/warnung-senkung-der-hemmschwelle-durch-kuenstliche-intelligenz/. English version: https://berlinergazette.de/against-the-rationalization-of-war/.