An investigation by Autominous, drawing on testimony from three former US military intelligence analysts and documents reviewed on condition of anonymity, reveals the operational reality of AI-assisted targeting in recent conflicts.
The system, internally referred to as Lavender, used machine learning to analyse surveillance data, communications intercepts, and pattern-of-life information to generate ranked lists of suspected militants. At peak operational tempo, it produced approximately 100 target recommendations per day.
Human reviewers were allocated an average of 20 seconds per target to approve or reject each recommendation. At that pace, the review process was structurally incapable of providing the meaningful human oversight required under the laws of armed conflict.
"You couldn't review anything in 20 seconds," said one former analyst who operated the system. "You could look at the name, glance at the confidence score, and approve. That was the process. The machine decided. We rubber-stamped."
The confidence scores assigned by the system ranged from 0 to 100. Sources told Autominous that targets with scores as low as 50 — meaning the system itself estimated only a coin-flip probability that the individual was correctly identified — were routinely approved when operational pressure was high.
The Department of Defense did not respond to detailed questions about Lavender's operational parameters. A spokesperson provided a general statement: "The Department of Defense employs AI-enabled tools to support decision-making across a range of operations. All targeting decisions are made by authorised personnel in accordance with the law of armed conflict and applicable rules of engagement."
International humanitarian law requires that targeting decisions involve a genuine assessment of military necessity and proportionality. Legal scholars say a 20-second review window makes such assessment impossible.
"This is not human-in-the-loop," said Professor Eleanor Chambers of the Oxford Institute for Ethics in AI. "This is human-in-the-way. The system is designed so that the human presence satisfies the legal checkbox without providing the substantive judgment the law requires."
What we know for certain
An AI targeting system was used operationally to generate kill recommendations. Human review times averaged approximately 20 seconds per target. Targets with confidence scores as low as 50 were approved.
What we are inferring
The review process was designed to satisfy legal requirements for human oversight rather than to provide genuine evaluative judgment. The operational tempo made meaningful review structurally impossible.
What we couldn't verify
The total number of strikes carried out based on Lavender recommendations, the civilian casualty count attributable to the system, or whether the system remains in active use. The DoD did not respond to specific questions.