The Challenges of AI to International Law: The Case of Gaza

By Dhami Mohan Singh

Dhami Mohan Singh is a master's student at Ritsumeikan University, Kyoto.

Technological breakthroughs have revolutionized almost every sphere of human life, and military technology is no exception. Today, both International Armed Conflicts (IACs) and Non-International Armed Conflicts (NIACs) are growing more complex with the development of sophisticated, automated weaponry. In particular, the use of Artificial Intelligence (AI) in warfare is a frightening phenomenon that major powers have not shied away from developing and deploying. For instance, the US military's Project Maven has supplied AI-assisted targeting in Ukraine and the Middle East, while China has been developing technologies to help analyze data, select potential targets, and expedite decision-making[i]. Recently, however, even more potent systems have come to the fore, challenging global norms of warfare and human rights.

Following the October 7 attack by Hamas in Israel, which killed 1,200 people and took 251 hostages, Israel has carried out continuous, indiscriminate attacks in Gaza, killing more than 37,296 people (more than 15,000 of them children), leaving more than 10,000 missing, and severely injuring 85,197[ii]. The Israeli military has conducted these attacks with the help of Artificial Intelligence (AI) tools, including automated drones loaded with unguided "dumb bombs". Although Israel had previously used a similar AI system called Habsora[iii] for attacks in Gaza, this is the first time the IDF has used AI so widely and indiscriminately, causing unprecedented civilian deaths. In the recent attacks, Israel used Lavender, an AI-based database system, to identify more than 37,000 targets based on their alleged links with Hamas. Lavender is linked to another AI-based decision support system known as Gospel, which recommends buildings and structures, rather than individuals, as targets. The military officers who subsequently authorized the actual attacks were found to have spent little time reviewing them, reducing the human role in approving targets to a mere rubber stamp[iv]. Moreover, top IDF commanders keep pushing subordinate officers to find more targets[v], which compels them to rely ever more heavily on AI-assisted targeting.

This presents a serious challenge to human rights law and humanitarian law alike, as it removes human agency from the act of killing and thereby obscures accountability. This paper analyzes the challenges to international human rights law (IHRL) and humanitarian law (IHL) posed by Israel’s use of AI for attacks in Gaza and makes recommendations for the development of human-centered approaches to the regulation of AI in international law. 

Challenges to IHL and IHRL in Gaza

Humanitarian law is the branch of international law that governs the conduct of warfare. A basic premise of the Geneva Conventions of 1949 and their Additional Protocols is that conflicting parties must clearly distinguish between civilian objects and military objectives. Civilians and non-combatants must be protected from harm or death: homes, hospitals, and schools, as well as individuals, especially children, women, and the disabled, must be distinguished from combatants and military objects. However, the Israel Defense Forces (IDF) have used the Lavender system, which prepares lists of human targets based on purported links with Hamas fighters, such as identical names, past communications, the frequency of visits to certain places, or even similar facial features. Available sources make clear that the IDF does not spend time or effort verifying these target lists; one military officer revealed that it took less than 20 seconds to verify each target the AI system produced, often only to confirm whether the target was male or female.
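
Lavender's technical details have not been published, but the reporting above describes a system that aggregates weak association signals into a single score. The sketch below is a minimal, purely hypothetical Python illustration of how such association-based scoring might work; every feature name, weight, and threshold here is an assumption, not a documented property of the IDF's system. It shows how a non-combatant who merely shares a name and some phone contacts with militants can cross the targeting threshold.

    # Hypothetical sketch of association-based target scoring.
    # Feature names, weights, and the threshold are illustrative
    # assumptions, not documented properties of any real system.
    FEATURE_WEIGHTS = {
        "name_matches_known_militant": 0.35,  # identical or similar name
        "contacted_flagged_number": 0.30,     # past communications
        "frequents_flagged_location": 0.25,   # movement patterns
        "facial_similarity_hit": 0.10,        # face-recognition match
    }
    THRESHOLD = 0.5  # score above which a person joins the target list

    def militancy_score(person: dict) -> float:
        """Sum the weights of whichever signals are present."""
        return sum(w for f, w in FEATURE_WEIGHTS.items() if person.get(f))

    # A civil-defense worker who shares a name with a militant and whose
    # job involves calling flagged numbers crosses the threshold despite
    # being a non-combatant: the false-positive failure mode described above.
    civil_defense_worker = {
        "name_matches_known_militant": True,
        "contacted_flagged_number": True,
    }
    score = militancy_score(civil_defense_worker)
    print(f"score={score:.2f}, flagged={score >= THRESHOLD}")  # score=0.65, flagged=True

The point of the sketch is not the particular numbers but the structure: every input is circumstantial, so a system of this kind cannot, even in principle, draw the distinction between civilian and combatant that IHL requires.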

This also infringes humanitarian law's other key principle for the conduct of war: proportionality. Belligerents must assess the potential harm of their actions to civilians and infrastructure and keep their use of force proportional to the military objective. Israel's AI-assisted attacks in Gaza, however, are highly disproportionate. According to +972 Magazine, the Israeli military authorized the killing of 15 to 20 civilians for every junior Hamas operative that Lavender marked, and up to 100 civilians for a single commander[vi]. Such violations directly increased civilian deaths and caused unprecedented damage to civilian infrastructure.

Moreover, under customary international humanitarian law, the conflicting parties must not attack the opponent without taking precautions, especially when the deaths of civilians or protected persons are highly likely. Sufficient time and information must be provided to evacuate non-combatants from conflict zones, and measures such as warning and alerting people through various means must be taken in due time. This rule, however, has been violated countless times in IDF attacks in Gaza and the West Bank. AI tracking systems such as 'Where's Daddy?' flag targets when they are at home, especially at night with their family members, causing maximum civilian casualties and collateral damage. This is a flagrant violation of the precautionary rule under humanitarian law and customary international law.

Human rights law, which exists separately from humanitarian law, also faces an unprecedented challenge from modern weapons such as armed drones and militarized robots. Because these technologies cannot sufficiently distinguish between civilians and combatants, they cause unnecessary suffering to vulnerable people[vii], violating both IHL and IHRL, and current laws are not framed to address such challenges. An AI-based program such as Lavender uses the photographs and phone contacts of people in Gaza to estimate the likelihood that they are militants, prepares a database of possible targets, and then recommends attacks[viii]. Using civilians' personal information without their consent violates the right to privacy, and it is all the more worrying in Gaza, where one state is exploiting the personal data of the population of another territory. Because Palestinians are not under Israeli jurisdiction, such use of their private data is unlawful.

Because AI-powered drones and automated weaponry have been carrying out indiscriminate attacks, everyone in Gaza is under threat; women, children, and disabled people in particular are unable to enjoy their human rights. According to Talbot[ix], Palestinians' rights to privacy and freedom of assembly have also been restricted since Israel started using Facial Recognition Technology (FRT) in 2019. Women's fundamental rights are violated, for instance by depriving them of safe childbirth and of sufficient, nutritious food. Women and children live under the constant fear of losing their loved ones; violence has become part of their daily lives, and the fear of being attacked and of losing their families has become a routine phenomenon in their communities. Everyone is under Israeli surveillance, their lives remotely controlled by the Israeli military, with little hope of protection from a human rights perspective.

Dimensions of the Challenges

As various sources show, Israel has been using AI widely to select and attack targets in Gaza at an unprecedented and indiscriminate scale. According to Matafta and Leufer[x], the IDF has been found to use three types of AI tools: Lethal Autonomous Weapon Systems (LAWS) and semi-autonomous weapons systems (semi-LAWS); facial recognition and biometric surveillance systems; and automated target-generation systems. The challenges these pose can be analyzed along three dimensions: legal, moral, and accuracy.

First, from a legal perspective: although no clear, globally accepted, legally binding rules yet govern military use of AI, such indiscriminate killing of civilians by means of AI is unacceptable under customary IHL. Moreover, LAWS and semi-LAWS have both been condemned by important international figures such as the UN Secretary-General as 'politically unacceptable and morally repugnant'[xi].

Second, on moral grounds: the use of automated weapons against civilians runs contrary to the morality of war[xii], especially when Israeli Prime Minister Benjamin Netanyahu calls his military 'the most moral military' in the world. Making such a vow publicly while failing to honor it in practice risks losing moral standing before the international community.

Third, from the accuracy perspective: as +972 Magazine has reported, AI warfare systems developed and trained on flawed instructions can malfunction badly. In the case of Lavender, non-combatant employees of the Hamas-run government, including police and civil defense staff, militants' relatives, and even individuals who merely share a name with a Hamas militant, are often treated as legitimate targets. If even a highly trained and regularly updated system such as Google Maps can drive someone into a creek[xiii] (on September 30, 2022, in North Carolina, Philip Paxson drove off a bridge that had collapsed nine years earlier but had never been updated in Google Maps), such errors in AI warfare systems can have devastating consequences for human lives. And there are no strong, reliable measures for testing the accuracy of such systems or uncovering their hidden errors.
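
To see how quickly small error rates become mass harm at this scale, consider a simple back-of-the-envelope calculation. The 37,000 marked targets and the 15-civilian allowance are taken from the reporting cited above; the 10% misidentification rate is an assumption for illustration only.

    # Back-of-the-envelope arithmetic on error rates at scale.
    # marked_targets and civilians_per_strike come from the reporting
    # cited above; the error rate is an illustrative assumption.
    marked_targets = 37_000
    assumed_error_rate = 0.10      # hypothetical share of wrong matches
    civilians_per_strike = 15      # reported allowance per junior operative

    wrongly_marked = int(marked_targets * assumed_error_rate)
    print(f"People wrongly marked: {wrongly_marked:,}")  # 3,700
    print(f"Civilian deaths pre-authorized for those strikes alone: "
          f"{wrongly_marked * civilians_per_strike:,}")  # 55,500

Even under these assumptions, a 'mostly accurate' system pre-authorizes tens of thousands of civilian deaths for strikes on people who should never have been targets at all.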

Current Initiatives and Way Forward

Discourse at the UN over 'lethal autonomous weapons systems' is ongoing. The UN General Assembly recently voted in favor of a new draft resolution holding that algorithms must not be fully in charge of decisions on warfare and attacks. The International Committee of the Red Cross (ICRC) has already called on states to limit the autonomy of weapon systems[xiv]. Similarly, the US has issued a declaration on the responsible military use of AI, which 50 other nations have endorsed, and the Netherlands and the Republic of Korea recently co-hosted a summit on the responsible use of military AI. The European Parliament has made a significant effort of its own by adopting the Artificial Intelligence Act in 2024, which seeks to regulate the development and use of AI across EU member states.

Although some efforts have been made, unilaterally or jointly, by individual nations, no effective measure has yet been taken to regulate the use of AI in war, particularly in Israel's attacks on Gaza. Israel's intentions in Gaza and continued US support further compound the problem of technological misuse there. Shereshevsky[xv] identifies rapid technological advancement and the lack of accurate information on military technology as two major challenges to regulating AI in warfare. To ensure the lawful use of AI in the military, Murray[xvi] therefore emphasizes due consideration in designing, developing, and monitoring AI throughout its use in warfare.

Israel's current use of AI in its attacks on Gaza continuously violates the principles of IHL and IHRL and must be stopped as soon as possible through a collaborative approach. To this end, the Geneva Conventions of 1949 and their Additional Protocols could be amended; if needed, separate rules regulating AI could be promulgated through regional or global initiatives. Actors like the US that supply advanced weapons and AI technologies can regulate those supplies. Organizations such as the ICC, the UN OHCHR, and the ICRC can actively advocate, monitor, and hold all involved actors responsible and accountable for violations of IHL and IHRL. AI-based decision support systems such as Lavender and 'Where's Daddy?' should likewise be regulated or banned outright from use against civilians in Gaza.

Although AI systems operate automatically, the human role cannot be disregarded. Using AI to completely replace humans in authorizing decisions removes human cognition, planning, and reasoning from the loop. AI also has its own 'systemic limitations'[xvii], so overreliance on machines to carry out attacks on humans, especially civilians, can ultimately threaten the entire human race. Targeting and killing civilians without assessing the reality on the ground also dehumanizes people[xviii] and can trigger a vicious cycle of hatred and revenge among peoples and nations. The use of AI to carry out attacks suppresses human rights and shatters humanity[xix]. Therefore, the ultimate responsibility for rational, contextual decisions should always rest with humans, and attacks should be sanctioned only after proper investigation, assessment, and planning. AI should instead be used for relief, rescue, and reconstruction in conflict-ridden regions, not for violating the human rights of women, children, and unarmed civilians. This is possible only if states and non-state actors work collaboratively, across borders and without further delay.

Conclusion

Israel's current AI-assisted attacks in Gaza have created new challenges for regulating warfare from both IHL and IHRL perspectives. Israel's AI-based decision support systems, such as Lavender and 'Where's Daddy?', have been found to replace the human role in decision-making. Using AI to find and attack targets, especially at night, exploiting personal data without consent, and striking civilian infrastructure with dumb bombs and automated drones all lie beyond the reach of current international humanitarian and human rights law.

Although many actors have begun crafting laws to regulate AI in warfare, and the EU's recent AI Act along with initiatives by the US and China are worth noting, a more concrete legal and institutional framework is urgently needed. Organizations such as the ICRC can play a leading role in the case of Gaza. Parties to the Geneva Conventions and their Additional Protocols can deliberate on adding provisions on the use and regulation of AI to those texts, and the institutions needed to oversee the production and use of AI in warfare could be established within the ICRC. Legal and institutional frameworks can also work to bring countries like Israel firmly within the jurisdiction of the Geneva Conventions and their Additional Protocols. This will be possible only with continuous pressure from the international community and commitment from major powers such as the US. If needed, an Additional Protocol IV could be developed specifically to regulate the use of AI in warfare and conflict.

Given how deeply technology pervades human life, completely avoiding the use of AI in warfare is almost impossible; what is vital is to regulate it by updating legal texts such as the Geneva Conventions and their Additional Protocols, and by making the necessary changes to existing institutions such as the ICRC. This will succeed only if countries such as the US and Israel agree to limit and regulate the use of AI against civilians and civilian infrastructure in the current hostilities in Gaza. Doing so is indispensable to protecting and promoting IHL and IHRL equally; it is the call of the day and the cry of humanity.

(Revised: August 2, 2024)


[i] Karner, N. (2024, April 11). Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law. The Conversation. Accessed June 22, 2024. https://theconversation.com/israel-accused-of-using-ai-to-target-thousands-in-gaza-as-killer-algorithms-outpace-international-law-227453

[ii] Al Jazeera. (2024, June 18). Israel-Gaza war in maps and charts: Live tracker. Accessed June 22, 2024. https://www.aljazeera.com/news/longform/2023/10/9/israel-hamas-war-in-maps-and-charts-live-tracker

[iii] Karner, N. (2024, April 11). Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law. The Conversation. Accessed June 22, 2024. https://theconversation.com/israel-accused-of-using-ai-to-target-thousands-in-gaza-as-killer-algorithms-outpace-international-law-227453

[iv] John, T. (2024, April 3). Israel is using artificial intelligence to help pick bombing targets in Gaza, report says. CNN. Accessed June 22, 2024. https://edition.cnn.com/2024/04/03/middleeast/israel-gaza-artificial-intelligence-bombing-intl/index.html

[v] McKernan, B., & Davies, H. (2024, April 3). 'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets. The Guardian. Accessed June 22, 2024. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

[vi] Al Jazeera. (2024, April 4). 'AI-assisted genocide': Israel reportedly used database for Gaza kill lists. Accessed June 22, 2024. https://www.aljazeera.com/news/2024/4/4/ai-assisted-genocide-israel-reportedly-used-database-for-gaza-kill-lists

[vii] Niyitunga, E. B. (2022). Armed drones and international humanitarian law. Digital Policy Studies, 1(2), 18–39.

[viii] Samuel, S. (2024, May 8). Some say AI will make war more humane. Israel's war on Gaza shows the opposite. Vox. Accessed June 22, 2024. https://www.vox.com/future-perfect/24151437/ai-israel-gaza-war-hamas-artificial-intelligence

[ix] Talbot, R. (2020). Automating occupation: International humanitarian and human rights law implications of the deployment of facial recognition technologies in the occupied Palestinian territory. International Review of the Red Cross, 102(914), 823–849.

[x] Matafta, M., & Leufer, D. (2024, May 9). Artificial genocidal intelligence: How Israel is automating human rights abuses and war crimes. Access Now. Accessed June 22, 2024. https://www.accessnow.org/publication/artificial-genocidal-intelligence-israel-gaza/

[xi] United Nations. (2019, March 25). Machines capable of taking lives without human involvement are unacceptable, Secretary-General tells experts on autonomous weapons systems [Statement]. Accessed June 22, 2024. https://press.un.org/en/2019/sgsm19512.doc.htm

[xii] Farr, G. V. D. K. (2021). The campaign to stop killer robots: Legal and ethical challenges posed by weaponised artificial intelligence and implications for arms control regimes.

[xiii] Gross, J. (2023, September 21). He drove into a creek and died. His family blames Google Maps. The New York Times. https://www.nytimes.com/2023/09/21/us/google-maps-lawsuit-collapsed-bridge.html

[xiv] ICRC. (2018). Ethics and autonomous weapon systems: An ethical basis for human control? [Report]. Accessed June 22, 2024. https://www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control

[xv] Shereshevsky, Y. (2022). International humanitarian law-making and new military technologies. International Review of the Red Cross, 104(920–921), 2131–2152.

[xvi] Murray, D. (2024). Adapting a human rights-based framework to inform militaries' artificial intelligence decision-making processes. Saint Louis University Law Journal, 68(2), 5.

[xvii] Stewart, R., & Hinds, G. (2023, October 24). Algorithms of war: The use of artificial intelligence in decision making in armed conflicts. ICRC Humanitarian Law & Policy Blog. Accessed June 22, 2024. https://blogs.icrc.org/law-and-policy/2023/10/24/algorithms-of-war-use-of-artificial-intelligence-decision-making-armed-conflict/

[xviii] Pedron, S. M., & da Cruz, J. D. A. (2020). The future of wars: Artificial intelligence (AI) and lethal autonomous weapon systems (LAWS). International Journal of Security Studies, 2(1), 2.

[xix] Ünver, H. A. (2024). Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights.
