INHUMAIN.AI
The Watchdog Platform for Inhuman Intelligence
Documenting What Happens When Intelligence Stops Being Human
AI Incidents (2026): 847 ▲ +23% | Countries with AI Laws: 41 ▲ +8 YTD | HUMAIN Partnerships: $23B ▲ +$3B | EU AI Act Fines: €14M ▲ New | AI Safety Funding: $2.1B ▲ +45% | OpenAI Valuation: $157B ▲ +34% | AI Job Displacement: 14M ▲ +2.1M | HUMAIN Watch: ACTIVE 24/7

Killer Robots: The Race to Remove Humans from Kill Decisions

An investigation into autonomous weapons systems: Turkey's Kargu-2, Israel's Lavender and Gospel AI targeting systems, the US Replicator initiative, and the global campaign to stop killer robots.

The Line

There is a line in military technology that, once crossed, cannot be uncrossed. On one side of the line, a human being decides who lives and who dies. On the other side, a machine decides. We are crossing that line now. In some cases, we have already crossed it.

Autonomous weapons systems — systems that can select and engage targets without human intervention — are no longer theoretical. They are deployed, operational, and killing people. The debate about whether to build them is over. The debate now is whether to regulate them, ban them, or accelerate their development. And that debate is being lost by those who believe that a machine should never be authorized to end a human life.


What Are Autonomous Weapons?

An autonomous weapon is a weapon system that can identify, select, and engage a target without a human making the final decision to fire. This is distinct from automated weapons (which follow pre-programmed rules, like a landmine) and remotely operated weapons (which are controlled by a human operator, like a drone piloted from thousands of miles away).

The critical distinction is the decision to kill. In a remotely operated system, a human sees the target, evaluates the context, considers the rules of engagement, and presses a button. In an autonomous system, the machine performs some or all of those functions. The human may be “in the loop” (making the final decision), “on the loop” (monitoring the system with the ability to intervene), or “out of the loop” entirely.
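The in-the-loop / on-the-loop / out-of-the-loop taxonomy above can be made concrete with a short sketch. This is an illustrative model only — the type and function names are hypothetical, not drawn from any actual weapons-control software:

```python
# Hypothetical sketch of the three human-control modes described above.
from enum import Enum

class HumanControl(Enum):
    IN_THE_LOOP = "human makes the final engagement decision"
    ON_THE_LOOP = "human monitors and may intervene before engagement"
    OUT_OF_THE_LOOP = "system engages with no human involvement"

def guarantees_human_decision(mode: HumanControl) -> bool:
    """Only an in-the-loop design guarantees that a person decides to fire.

    An on-the-loop veto can be defeated in practice by engagement speed
    or by degraded communications, which is why critics treat it as a
    weaker standard than 'meaningful human control'.
    """
    return mode is HumanControl.IN_THE_LOOP
```

The point the sketch makes is the campaigners' point: the middle category is unstable. A system that merely permits intervention behaves, under time pressure or lost connectivity, like one that does not.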

The Campaign to Stop Killer Robots, a coalition of over 250 non-governmental organizations across 70 countries, defines the key issue as systems where “meaningful human control” over the use of force is absent. What constitutes “meaningful” human control is itself a contested question, and that ambiguity is part of the problem.


The Kargu-2: The First Autonomous Kill?

In March 2021, a United Nations Panel of Experts report on the Libyan civil war described an incident that may represent the first documented case of an autonomous weapon attacking human targets without explicit human authorization to do so.

The weapon was the STM Kargu-2, a small, rotary-wing drone manufactured by the Turkish defense company STM. The Kargu-2 is classified as a loitering munition — a weapon that can fly to a target area, loiter until a target is identified, and then strike. It is designed for both manual operation and autonomous engagement.

According to the UN report, Kargu-2 drones were deployed against forces loyal to Khalifa Haftar’s Libyan National Army during a retreat. The report stated that the drones were programmed to attack targets without requiring data connectivity between the operator and the munition, suggesting that the engagement decision was made by the drone’s onboard systems rather than by a human operator.

STM disputed this characterization, stating that the Kargu-2 requires human authorization for engagement. The UN Panel’s report did not resolve this dispute definitively. But the incident crystallized a reality that the arms control community had been warning about for years: autonomous lethal engagement is not a future possibility. It is a present capability, deployed in active conflict, with ambiguous accountability.

The Kargu-2 is a small, inexpensive drone. It is not the most sophisticated autonomous weapons platform in development. But it represents the democratization of autonomous killing: a weapon cheap enough for non-state actors to acquire, small enough to evade conventional air defenses, and capable enough to select and strike human targets without a human in the loop.


Israel’s Lavender and Gospel: AI-Targeted Warfare at Scale

The most extensive documented use of AI in targeting decisions has occurred in Israel’s military operations in Gaza. Investigations by Israeli-Palestinian outlet +972 Magazine and other journalistic sources have described two AI systems — Lavender and Gospel — that fundamentally altered the relationship between human judgment and lethal force.

The Gospel System

Gospel is an AI system used by the Israel Defense Forces to generate targets for bombing campaigns. According to reporting, Gospel processes surveillance data, signals intelligence, and other inputs to identify buildings, structures, and locations associated with militant activity. The system dramatically increased the rate at which targets could be generated — from a process that previously took days or weeks to one that could produce targets in near real-time.

The acceleration of target generation had direct consequences for the pace and scale of bombardment. More targets meant more strikes, which meant more destruction, which meant more civilian casualties. The question of whether Gospel’s targeting recommendations were subject to meaningful human review — whether the officers authorizing strikes based on Gospel’s outputs understood the basis for those recommendations and independently evaluated them — is contested and largely unverifiable from outside the IDF’s command structure.

The Lavender System

Lavender, as described by sources within the Israeli military speaking to journalists, is an AI system that generates lists of suspected militants based on pattern-of-life analysis. The system assigns individuals a rating indicating the probability that they are affiliated with a militant organization. According to reporting, the system was used to generate tens of thousands of potential targets.

The most consequential aspect of the Lavender system, as described by sources, was the relationship between the AI’s assessment and the human decision to authorize a strike. Sources indicated that human review of Lavender’s targeting recommendations was, in practice, minimal — sometimes taking as little as twenty seconds per target. The AI generated the list; humans rubber-stamped it.

If this characterization is accurate, Lavender represents not the theoretical future of AI-targeted warfare but its present reality: a system in which the decision to kill a specific individual is made, in substance if not in form, by an algorithm, with human involvement reduced to a procedural formality insufficient to constitute meaningful oversight.


The US Replicator Initiative

The United States Department of Defense launched the Replicator initiative in August 2023 with an explicit objective: to field large numbers of autonomous systems across multiple domains — air, land, sea, subsea — within 18 to 24 months. The initiative was framed as a response to China’s numerical military advantage and a strategy for maintaining US military superiority through technological rather than numerical means.

Replicator’s first tranche focused on autonomous aerial vehicles, autonomous surface and subsurface vessels, and autonomous ground systems. The initiative emphasized speed of deployment, cost-effectiveness, and the ability to operate in contested environments where communications with human operators may be degraded or denied.

The phrase “attritable autonomous systems” recurs throughout Replicator planning documents. “Attritable” means cheap enough to lose. The concept is to field autonomous systems in such numbers and at such low cost that individual losses are acceptable — the military equivalent of disposable infrastructure. This approach has obvious implications for the threshold at which force is used: systems that are cheap and replaceable face lower institutional barriers to deployment than systems that represent significant capital investment.

The Replicator initiative does not explicitly authorize autonomous lethal engagement without human control. Department of Defense Directive 3000.09, updated in 2023, requires that autonomous weapon systems be designed to allow commanders and operators to exercise “appropriate levels of human judgment over the use of force.” But the directive does not define what level of judgment is “appropriate,” and the operational realities of deploying autonomous systems in communications-denied environments create scenarios where meaningful human control may be impossible.


The Global Landscape

The United States, China, Russia, Israel, Turkey, and dozens of other countries are developing autonomous weapons capabilities. The technology is advancing along multiple parallel tracks: loitering munitions that can autonomously identify and strike targets, autonomous swarm systems that coordinate without centralized human control, AI-assisted targeting systems that accelerate the kill chain, and autonomous defensive systems (like missile defense and counter-drone systems) that must react faster than human decision-making allows.

China

China has invested heavily in autonomous weapons research, including swarm drone technology, autonomous naval vessels, and AI-assisted command-and-control systems. The People’s Liberation Army has described AI as a force multiplier that could offset US advantages in training and technology. Chinese military doctrine increasingly emphasizes “intelligentized warfare” — warfare in which AI systems play central roles in decision-making, logistics, and engagement.

Russia

Russia has developed a range of autonomous and semi-autonomous weapons platforms, including the Uran-9 unmanned ground combat vehicle and the Poseidon autonomous nuclear torpedo. Russian military doctrine has emphasized the development of autonomous systems as a strategic priority, and Russian officials have stated publicly that they will not agree to a ban on autonomous weapons.

Middle Powers and Non-State Actors

The proliferation of autonomous weapons technology to middle powers and non-state actors represents a qualitatively different challenge than great-power development. Countries like Turkey, Iran, and the UAE are developing and exporting autonomous-capable systems at price points accessible to a far wider range of buyers. The Kargu-2 incident in Libya demonstrated that autonomous weapons are not the exclusive preserve of great powers.


The Campaign to Stop Killer Robots

The Campaign to Stop Killer Robots, launched in 2013, has been the most prominent civil society effort to preemptively ban autonomous weapons. The campaign, which includes organizations like Human Rights Watch, the International Committee for Robot Arms Control, and dozens of national advocacy groups, has called for a legally binding international treaty prohibiting autonomous weapons systems that lack meaningful human control over the use of force.

The campaign has achieved significant diplomatic traction. Over 100 countries have engaged with the issue through the United Nations Convention on Certain Conventional Weapons (CCW), and a growing number of states have called for a preemptive ban. The UN Secretary-General has endorsed the call for a ban. The International Committee of the Red Cross has called for new legally binding rules.

But the campaign has also encountered the structural obstacle that bedevils most arms control efforts: the states most actively developing the technology are the states least willing to ban it. The United States, Russia, China, Israel, and other leading military powers have consistently resisted binding prohibitions, arguing that autonomous weapons can be developed and deployed in compliance with existing international humanitarian law.

The CCW negotiations have been characterized by procedural delays, definitional disputes, and the requirement for consensus that allows any single state to block progress. After more than a decade of discussions, no legally binding instrument has been adopted. The technology, meanwhile, has continued to advance.


International Humanitarian Law

Existing international humanitarian law (IHL) requires that the use of force satisfy principles of distinction (between combatants and civilians), proportionality (between military advantage and civilian harm), and precaution (taking feasible steps to minimize civilian casualties). These principles assume a human decision-maker capable of judgment, contextual understanding, and moral reasoning.

The fundamental legal question is whether an autonomous system can satisfy these requirements. Can an algorithm distinguish a combatant from a civilian in the fog of war? Can it assess proportionality in a context-dependent situation where the relevant factors include cultural knowledge, emotional intelligence, and ethical judgment? Can it take precautions that require the kind of situational awareness that humans acquire through experience and empathy?

Proponents of autonomous weapons argue that AI systems can, in principle, make more accurate targeting decisions than stressed, fatigued, emotionally compromised human soldiers. They point to cases of human error, civilian casualties caused by poor judgment, and the potential for AI to reduce overall harm by making warfare more precise.

Opponents argue that accuracy of targeting is not the same as lawfulness of targeting. International humanitarian law requires judgment, not just accuracy. A system that can identify a target correctly 99.9% of the time but cannot evaluate whether striking that target is proportionate in context is a system that is accurate but not lawful. And a 0.1% error rate compounds at scale: one wrongful selection in every thousand engagements, dozens across tens of thousands of strikes — each an unlawful killing with no accountable decision-maker.
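The error-rate argument above is simple expected-value arithmetic, and it is worth making explicit. The figures below are illustrative assumptions, not reported data from any deployed system:

```python
# Back-of-the-envelope arithmetic for the error-rate argument.
# error_rate and engagements are illustrative assumptions.
def expected_misidentifications(error_rate: float, engagements: int) -> float:
    """Expected number of wrongly selected targets at a given error rate."""
    return error_rate * engagements

# A 0.1% error rate scales linearly with the number of engagements:
print(expected_misidentifications(0.001, 1_000))    # 1.0
print(expected_misidentifications(0.001, 10_000))   # 10.0
print(expected_misidentifications(0.001, 100_000))  # 100.0
```

The arithmetic is trivial; the implication is not. Automation removes the bottleneck on engagement volume, so even a very small per-engagement error rate is multiplied into a large absolute number of wrongful strikes.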

Accountability

The accountability question is perhaps the most intractable. If an autonomous weapon kills a civilian unlawfully, who is responsible? The soldier who deployed it? The officer who authorized its deployment? The engineer who designed its targeting algorithm? The company that manufactured it? The government that procured it?

In existing legal frameworks, criminal liability for unlawful use of force attaches to the person who made the decision to use force. If no person made that decision — if the decision was made by an algorithm — the chain of accountability is broken. This is not a gap that can be fixed by rewriting regulations. It is a structural consequence of removing human decision-making from the use of lethal force.


The Arms Race Dynamic

The most dangerous aspect of autonomous weapons development is not any individual system but the competitive dynamic driving development. Each state that advances autonomous weapons capability creates pressure on its adversaries to do the same. Each deployment creates a new baseline that competitors must match or exceed.

This dynamic is self-reinforcing. No state wants to be the first to deploy fully autonomous weapons. But no state wants to be the last to develop the capability. The result is a race in which every participant moves forward while insisting that they are merely keeping pace.

Arms races are not deterministic. They can be managed, constrained, and in some cases reversed by treaties, norms, and institutional arrangements. The Chemical Weapons Convention, the Biological Weapons Convention, and the Treaty on the Non-Proliferation of Nuclear Weapons all demonstrate that states can, under the right conditions, agree to constrain their own military capabilities.

But those agreements were reached, in most cases, after the catastrophic consequences of unregulated development had become undeniable. The question is whether the international community can regulate autonomous weapons before the catastrophe, rather than after.


What Comes Next

The trajectory is clear. Autonomous weapons systems are becoming more capable, more affordable, more widely available, and more deeply integrated into military operations worldwide. The window for preemptive regulation is narrowing. The diplomatic mechanisms that might produce binding constraints are moving at the pace of international diplomacy while the technology moves at the pace of Silicon Valley.

What remains uncertain is whether the deployment of autonomous weapons will be bounded by meaningful human control or whether the logic of military competition will push that control to the margins and eventually beyond them. The technology does not determine the outcome. Human choices determine the outcome. But those choices are being made in an environment of competitive pressure, strategic uncertainty, and institutional inertia that has historically favored escalation over restraint.

The machines are ready. The question is whether we are ready for what happens when we let them decide.