Gospel truth: Israel turns AI into civilian killing machine in Gaza

Israeli army's deployment of the Habsora system in the besieged Palestinian enclave marks a pivotal shift in the role of technology in armed conflict, raising profound questions about the future of warfare, ethics, and international law.

The Israeli army introduced an unregulated AI system named Habsora, also referred to as 'The Gospel'. / Photo: Reuters

In the autumn of 2023, the landscape of modern warfare underwent a significant transformation — with artificial intelligence (AI) firmly becoming part of the war machinery.

Once the stuff of fantasy, AI has evolved exponentially over the last decade. Yet it had never before been used en masse in strategic warfare.

As of October 2023, the world bore witness to a transformation of AI as a strategic and military tool of warfare that was not merely evolutionary but revolutionary.

From our television screens, handheld phones and computers, the evidence is clear: AI, when left unregulated by a state during warfare and deployed without any consideration of its ethical dimensions, produces horrifying, detrimental and indiscriminate consequences, leaving destruction, death and rampant devastation in its wake.

Following its ground invasion of the besieged Palestinian enclave of Gaza, the Israeli army introduced a groundbreaking and unregulated AI system named Habsora, also referred to as 'The Gospel'.

This deployment marked a pivotal shift in the role of technology in armed conflict, raising profound questions about the future of warfare, ethics, and international law.

The utilisation of Habsora not only redefined the operational capabilities of the Israeli army but also spotlighted the intricate interplay between technological innovation, military strategy, regulation and the abandonment of ethical considerations in times of war.

System at the vanguard

Habsora's inception was a direct response to the challenges encountered by the Israeli army in previous conflicts in Gaza. Launched in October 2023, this AI system revolutionised the Israeli army's target acquisition process.

While conventional intelligence operations might yield approximately 50 viable targets in Gaza over a year, Habsora's machine-learning algorithms are said to have helped identify up to 100 targets daily.

By analysing vast data arrays, the Israeli army claims the system could discern patterns linking specific locations to Hamas operatives while estimating potential civilian casualties preemptively.

Yet none of these claims has been substantiated by neutral international bodies.

In addition, as the world bears witness to the rampant and rising death toll the Israeli army has inflicted upon the innocent civilians of Gaza and their homeland, it seems reasonable to conclude that Israel's tall claims that its AI weaponry is a fool-proof system have fallen flat. Either that, or it simply does not care who is hurt and how.

Habsora has operated on the principles of probabilistic reasoning, a hallmark of advanced machine learning.

By cross-referencing enormous data sets, it is believed to identify patterns consistent with enemy combatant behaviour. The Israeli army further claimed that this efficiency not only accelerated its targeting process but also enhanced the precision of airstrikes.

The strategic narrative broadcast to the world rested on Israel's repeated claim that it intended to minimise collateral damage and civilian casualties, a claim both heralded for its potential and scrutinised for its actual effectiveness in the fog of war.

As the world watches the gut-wrenching scenes emerging from Gaza, the carpet bombing of residential areas and the indiscriminate attacks on schools, hospitals and UN refugee camps, one does more than wonder about the ineffectiveness of these "precision attacks".

With the evidence before us, it seems more than justified to say that the Israeli army's strategic narrative in this information war, that of minimising collateral damage and civilian casualties, is nothing more than a tool in its communications toolbox.

Ethical and moral questions

Turning directly to the ethical and strategic implications of the use of AI, we must focus on two core pillars fundamental to analysing its use during war: speed and precision.

The Israeli army has claimed that the integration of Habsora into its operations marked a significant escalation in the speed and precision of warfare. It proudly and loudly proclaimed to the world that the AI system's rapid processing and decision-making capabilities outstripped traditional human-led intelligence efforts.

However, none of these claims has yet been verified by neutral international bodies or independent journalists.

All the world is bearing witness to is the rapidly rising death toll of thousands of innocent Palestinian civilians, in addition to thousands more maimed, hundreds of thousands now suffering from starvation and disease, millions displaced, and homes, schools, hospitals, mosques and entire villages destroyed. Not to mention the trauma forever embedded in those who survive.

Additionally, we must remember that the question of speed within AI strategic weaponry comes with its own set of ethical dilemmas.

Namely, the inability of this weaponry to reliably distinguish between combatants and non-combatants, a cornerstone of international humanitarian law enshrined in the principle of distinction under the Geneva Conventions.

But this distinction is now being made by algorithms whose decision-making processes are neither transparent nor infallible, and which are highly discriminatory towards the "targets" they choose.

With the evidence of the Israeli state's use and misuse of AI consistently presented to us since the army's ground invasion of Gaza, one could claim that it is testing its new AI machinery and military tactics by using the people of Gaza as test subjects.

It is a bold claim to make, but not without evidence. We must remember that our trust in technology is a double-edged sword.

In some instances, it brings billions of people together to fight occupation and to resist. To have their voices heard. To take action and to mobilise the ordinary citizen on the street, who is now learning about these abuses through the device in their hand.

However, the good side of technology does not diminish the bad. Pessimistically, we would say it does not even dilute it.

The reliance on AI systems like Habsora has raised critical questions about the extent to which military commanders and personnel should trust machine-generated recommendations, questions that should continue to be pressed through international courts and multilateral bodies.

From what we have seen so far, the answer to anyone with open eyes would be a big, resounding 'no'. We have witnessed that the system, named after a term connoting infallibility, could inadvertently encourage an overreliance on technological suggestions. Indeed, it is not hyperbolic to say this is exactly what we see during this unidirectional war.

The trust that certain states place in AI-generated data may enhance operational efficiency, but it has obscured the nuanced human judgement that is essential in warfare.

This is most notable in urban settings like Gaza, where the civilian population is densely intermingled with combatants. When we turn to the broader context of AI in contemporary warfare, we must analyse how it is redefining the military operations we are all witnessing on our screens.

We can see clearly that the use of Habsora is not an isolated phenomenon. Not at all. It was, and is, part of a broader trend in which major militaries worldwide are increasingly incorporating AI into their operational frameworks.

From intelligence gathering to autonomous weapon systems, AI is reshaping the character of conflicts, making entry into engagements more feasible and altering traditional concepts of deterrence and strategy.

Warning for humankind

This is no longer a forecast but a warning.

AI is here to stay, deeply entrenched in global warfare and growing and evolving at breakneck speed.

This tidal wave of AI evolution during warfare, as demonstrated by Israel's unidirectional war in Gaza, has once more highlighted to the international community at large the urgent need for a robust regulatory and ethical framework.

The dynamic nature of AI and machine learning has, without doubt, complicated and convoluted the establishment of clear legal and moral guidelines on regulatory and ethical challenges, in times of war and otherwise.

Based on those foundational statements, there is a pressing need for international discourse and agreement on the boundaries and oversight of AI in military applications.

Israel's use of AI in the 2023 Gaza war has set a precedent in the realm of technologically enhanced warfare. Habsora's deployment illustrates the formidable potential of AI to transform military strategies, bringing unparalleled efficiency and capabilities.

However, it also underscores the complexity of integrating such advanced technologies into combat operations, where ethical, legal, and humanitarian considerations are paramount.

The case of Habsora, therefore, stands as a critical point of reflection for military strategists, policymakers, and the international community as they grapple with the emerging realities of AI in warfare.
