One military expert notes that back in 1999, the U.S. military used six layers and more than 20 professional personnel to validate a single target: one analyst studied maps, another studied photographs, and a third studied intelligence reports. Only after a target had passed through all of these layers was it considered valid, a process that could take anywhere from several days to months. AI has collapsed that timeline, finding a target almost instantly. But while AI boosts productivity in ‘what it does’, there are perils in ‘how it does it’. This is a grave concern because, in war, these are matters of life and death.
How Does AI Work?
AI is remarkably effective at identifying patterns and making predictions, but it can cause serious problems when misused. Social media platforms offer a clear illustration: these sites rely on algorithms to determine which advertisements and content to surface for each user. When those algorithms are designed with a single-minded focus on generating clicks, the result is a platform flooded with cheap, sensational content. Engagement numbers climb, but the actual experience for users steadily deteriorates.
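To make that concrete, here is a minimal sketch of a click-driven feed (the posts and scores are invented for illustration): the platform simply sorts content by predicted clicks, so quality never enters the ranking.

```python
# Hypothetical posts with predicted click probabilities (invented numbers).
posts = [
    {"title": "Sensational rumor",       "predicted_clicks": 0.92},
    {"title": "In-depth reporting",      "predicted_clicks": 0.31},
    {"title": "Outrage-bait screenshot", "predicted_clicks": 0.88},
]

# The feed is ordered purely by predicted clicks; quality is not an input.
feed = sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)

for rank, post in enumerate(feed, start=1):
    print(rank, post["title"])
# 1 Sensational rumor
# 2 Outrage-bait screenshot
# 3 In-depth reporting
```

The cheap, sensational items dominate the top of the feed not because anyone chose them, but because quality is simply absent from the sort key.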
The problem, however, is not the algorithms themselves; it is how people design and deploy them. There are two fundamental weaknesses that anyone working with algorithms needs to understand.
The first is that algorithms are brutally literal. They execute their instructions to the letter, nothing more. A person given the goal of “maximizing quality as measured by clicks” would intuitively understand that quality is the real objective and clicks are simply the yardstick. An algorithm makes no such inference: it chases clicks, period, even if doing so completely undermines the quality it was supposed to reflect.
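The gap between intent and instruction can be shown in a few lines (again with invented numbers, a sketch rather than any real system): the human reads the intent behind ‘maximize quality as measured by clicks’, while the algorithm reads only the letter.

```python
# Two candidate articles (hypothetical quality and click figures).
articles = [
    {"title": "Careful analysis",  "quality": 0.9, "clicks": 120},
    {"title": "Misleading teaser", "quality": 0.1, "clicks": 950},
]

# A person optimizes the real objective (quality)...
human_pick = max(articles, key=lambda a: a["quality"])

# ...the algorithm optimizes the measurement (clicks), literally.
algo_pick = max(articles, key=lambda a: a["clicks"])

print("Human picks:    ", human_pick["title"])  # Careful analysis
print("Algorithm picks:", algo_pick["title"])   # Misleading teaser
```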
The second weakness is that algorithms are opaque. They can be extraordinarily accurate at forecasting outcomes, but they offer no explanation for why those outcomes occur. An algorithm might reliably predict which news articles will be widely shared on social media, for example, but it cannot tell you what actually motivates people to share them. The prediction arrives without the reasoning behind it: a correct answer with no understanding attached.
Modern military AI systems are, at their core, sophisticated algorithms, and they inherit the same fundamental flaws.
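Returning to the news-sharing example, this opacity looks like the following in miniature (the model class and its weights are invented for illustration): the predictor exposes a score, and nothing else.

```python
import math

# A stand-in for a trained black-box model (weights are invented).
class ShareabilityModel:
    # Learned weights: useful for prediction, meaningless as explanation.
    _weights = {"headline_length": -0.02, "emotion_score": 1.4, "has_image": 0.6}

    def predict(self, article: dict) -> float:
        """Return a share-probability score in (0, 1).
        Note there is no explain() method: the model can say *what*
        will spread, never *why* people share it."""
        score = sum(w * article.get(k, 0.0) for k, w in self._weights.items())
        return 1 / (1 + math.exp(-score))

model = ShareabilityModel()
print(model.predict({"headline_length": 45, "emotion_score": 0.9, "has_image": 1}))
# ~0.72 -- an answer with no reasoning attached
```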
From Ukraine to Gaza to Iran
Proponents of military AI emphasize its potential to reduce the fog of war, to process more information faster, and to improve the accuracy of targeting. There is genuine evidence for this capability. Ukraine employed facial recognition to identify Russian soldiers, used AI to analyze intelligence and plan operations, and equipped long-range drones with AI capable of autonomously identifying terrain and military targets. Meanwhile, Russia used AI to conduct cyber operations, locate targets, analyze intelligence, and carry out drone attacks.
In Gaza, Israel deployed several AI-powered systems during its military campaign from October 2023 onwards. ‘The Gospel’ (Habsora) used machine learning to produce as many as 100 targets per day, where human analysts would previously have identified around 50 per year, roughly a seven-hundred-fold increase in throughput. Another system, ‘Lavender’, ranked the entire male population of Gaza by probability of militant affiliation, flagging 37,000 individuals for potential targeting. The difference between the two systems is that the Gospel targets residential, commercial, or industrial structures, while Lavender targets people.
But accuracy at scale contains a troubling paradox. Even a small error rate becomes catastrophic when multiplied across tens of thousands of targeting decisions made at machine speed. The Lavender system used by Israel in Gaza carried an acknowledged error rate of approximately 10 percent, a rate military commanders considered acceptable. Yet it meant that thousands of individuals were algorithmically designated for targeting despite having no actual connection to Hamas.
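The scale of that 10 percent is easy to understate; a quick back-of-the-envelope calculation using the figures above makes it concrete.

```python
# Back-of-the-envelope arithmetic from the reported figures:
# 37,000 individuals flagged, ~10 percent acknowledged error rate.
flagged = 37_000
error_rate = 0.10

wrongly_flagged = flagged * error_rate
print(f"Individuals wrongly flagged: ~{wrongly_flagged:,.0f}")  # ~3,700
```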
More disturbing was the human oversight that accompanied these algorithmic recommendations. Intelligence officers told investigators they spent an average of approximately 20 seconds reviewing each Lavender recommendation before approving a strike. One officer candidly admitted: ‘We were not interested in how the machine arrived at its conclusions. We only wanted to know if the target was male.’
This phenomenon (known in psychology as ‘automation bias’) describes the tendency of humans operating under high-stress, high-volume conditions to trust machine output over their own judgment. When a computer presents a targeting recommendation with apparent confidence, and a soldier has only seconds to respond before moving to the next, the capacity for independent critical thinking is severely diminished.
The most widely cited example of AI targeting failure in the Iran war occurred on February 28, 2026, the opening day of hostilities. A missile struck the Shajar-e Tayyeb elementary school for girls in Minab, Hormozgan province, while school was in session. As survivors sheltered, a second missile (in what is known as a ‘double-tap’ strike) hit the same location. The combined strikes killed approximately 175 people, the majority of them young girls.
Many AI targeting systems function as ‘black boxes’. Even their developers cannot fully explain how a particular output was generated. When a strike based on an algorithmic recommendation kills civilians, the chain of legal and moral accountability becomes diffused. This structural diffusion of responsibility represents an unprecedented challenge to the laws of armed conflict.
Can AI companies do anything?
Anthropic’s case is notable in the context of current warfare. Founded on a ‘safety first’ principle, Anthropic entered into a partnership with the U.S. government’s military apparatus in June 2024. Claude (Anthropic’s primary AI system) was integrated into intelligence analysis, operational planning, cyber operations, and modeling and simulation for the Department of Defense.
However, the agreement carried two explicit restrictions: Claude could not be used for mass domestic surveillance of American citizens, a practice existing privacy law was not designed to govern, and it could not be deployed in fully autonomous weapons systems capable of making lethal decisions without human involvement.
On autonomous weapons, Anthropic’s position was both ethical and technical. The company argued that current AI technology is simply not reliable enough to be trusted with lethal decisions. The history of AI targeting errors (and the civilian casualties that followed) supported this caution. Deploying unreliable AI in autonomous weapons would endanger American warfighters, not protect them.
Returning to why AI is a problem
Because AI is literal and opaque, it is dangerous to deploy in warfare. First, AI can identify as many targets as possible while undermining the quality it was supposed to reflect. Second, AI can identify a target without any explanation of why that target was chosen.
Anthropic’s case highlights that the AI companies that develop and manage these systems can also control what they offer to their clients. It is time for AI companies to decide whether they will remain ethical and on the right side of history, or simply become just another company.
Even if AI companies take a stronger position against the exploitation of their AI models, governments could hire individual AI scientists to develop such models for them. Therefore, the entire tech community should take a strong stance to protect its creations from being misused.


