ANWAR MHAJNE
Artificial intelligence (AI) has quickly emerged as one of the most transformative digital technologies, and Israel has pioneered its use in military settings. Despite the recent Hamas attacks, which raised doubts about the efficacy of AI-enhanced surveillance, the Israeli military will likely employ a combination of drone-based and hacking techniques to pinpoint Hamas targets if it launches a full ground offensive into Gaza. In this way, Israel offers insights into the interplay between technological advancements, international relations, and human rights, making it an important case study as national and international regulatory bodies grapple with how best to adapt to AI.
Israel’s AI achievements span various domains, from civilian applications to national security. AI-driven advancements in autonomous driving, cyberwarfare, intelligence, and autonomous weapon systems bolster the nation’s military capability. In turn, the Israeli security industry relies on its close relationship with the Israel Defense Forces (IDF) to advance the research, development, and implementation of new technologies. As part of this “dual feeding” process, leading military technology units in the IDF recruit talented high school graduates for military service, where they receive significant training and experience; upon their release from the army, many join startup companies or establish their own, often in cybersecurity.
We thus cannot separate Israel’s achievements in AI from its occupation of Palestinians. Reports have highlighted the testing and deployment of AI surveillance and predictive policing systems in the Palestinian territories. In the occupied West Bank, Israel increasingly uses facial recognition technology to monitor and regulate the movement of Palestinians. A report by Amnesty International reveals that at heavily fortified checkpoints in Hebron, Palestinians must undergo facial recognition scans; a color-coded mechanism then guides soldiers on whether individuals should be allowed to proceed, subjected to further questioning, or detained. While Israel has long imposed restrictions and surveillance on Palestinians, AI advancements now enable the IDF to collect extensive data more efficiently.
Israel’s AI technological edge not only helps perpetuate the occupation but has also reshaped how the country conducts warfare, particularly through its use of drones, or unmanned aerial vehicles (UAVs). Lightweight and cost-effective, the Golden Eagle UAV is a prime example of Israeli AI-enhanced weaponry. Golden Eagle drones use AI to lock onto both static and moving targets and track them seamlessly in real time, ensuring precise hits on targets on the ground or in the air, irrespective of lighting conditions.
In 2021, the IDF launched what it referred to as the world’s first AI war: the eleven-day offensive on Gaza known as “Operation Guardian of the Walls” that killed 261 Palestinians and injured 2,200. Israeli military leaders described AI as a significant force multiplier, allowing the IDF to use autonomous robotic drone swarms to gather surveillance data, identify targets, and streamline wartime logistics.
Israel’s use of AI-enhanced weaponry raises ethical questions about how the technology can facilitate violence and further Palestinian dispossession by lowering the human cost of warfare for Israel. But AI’s impact on human rights is not limited to Palestinians: Israel is one of the world’s largest weapons exporters, and Israeli spyware has been used by other oppressive regimes to monitor dissidents. Israel is also invested in developing lethal autonomous weapon systems (LAWS) and has already exported lethal UAVs to Chile, China, India, South Korea, and Turkey.
Israel’s use of AI reveals the consequences of relying on individual states to regulate new technologies and safeguard human rights. A lack of international ethical guidelines and legal frameworks for AI, in other words, will strengthen authoritarian governments. The European Union has been a pioneer in shaping international AI policy, as exemplified by the AI Act. But the EU’s approach predominantly focuses on mitigating individual harms stemming from AI rather than tackling the broader systemic risks the technology poses to society. Transforming ethical AI from a concept into reality requires an evaluation of these risks, particularly AI’s impact on civil and social rights, including privacy, freedom of expression and association, and consent, as well as other fundamental civil liberties and social freedoms.
Anwar Mhajne is an Assistant Professor of Political Science at Stonehill College where she researches and teaches on gender, religion, cybersecurity, disinformation, and Middle Eastern politics. Follow her on X @mhajneam.