AI at War: The Military Applications of Inhuman Intelligence
Comprehensive investigation of AI in military applications: autonomous weapons (LAWS), Project Maven, Palantir, Israel's Lavender and Gospel systems, drone swarms, nuclear C2 risks, the absence of international treaties, private military AI, and HUMAIN's potential defense applications.
Artificial intelligence has gone to war. This is not a prediction, a warning, or a scenario from a policy paper. It is a statement of fact. AI systems are being used in active military operations today — to identify targets, to guide munitions, to coordinate drone swarms, to conduct cyber operations, and to make decisions about who lives and who dies. In some cases, the role of the human in these decisions has been reduced to a formality.
The military application of AI represents the most consequential and least governed dimension of the technology’s development. The AI systems being deployed in warfare today have received a fraction of the safety testing, ethical scrutiny, and public oversight applied to commercial AI products. A chatbot that generates offensive text triggers regulatory investigations. An AI system that generates target lists for airstrikes operates with minimal external accountability.
This is not an oversight. It is a feature of how military AI has been developed and deployed: rapidly, under classification, by organizations that prioritize operational advantage over transparency, and in a regulatory environment where international law has failed to keep pace with technological capability.
The Spectrum of Military AI
Military AI is not a single technology. It spans a spectrum from decision-support tools that augment human judgment to fully autonomous weapons that can select and engage targets without human intervention.
| Category | Human Role | Current Status | Examples |
|---|---|---|---|
| Decision support | AI provides analysis; humans decide and act | Widely deployed | Intelligence analysis, logistics optimization |
| Human-on-the-loop | AI recommends actions; humans approve or veto | Operational | Target recommendation systems, defensive systems |
| Human-supervised autonomy | AI acts autonomously within defined parameters; humans monitor | Emerging | Defensive missile systems, some drone operations |
| Full autonomy | AI selects and engages targets without human approval | Limited deployment; legal gray area | Some missile defense systems, loitering munitions |
The boundaries between these categories are not sharp, and the direction of movement is unmistakably toward greater autonomy. Each successive generation of military AI system requires less human involvement, operates faster, and makes decisions of greater consequence.
Project Maven: Where It Started
Project Maven — officially the Algorithmic Warfare Cross-Functional Team — was established by the US Department of Defense in April 2017. Its original mission was narrow: use computer vision AI to analyze drone surveillance footage, automating the labor-intensive process of reviewing thousands of hours of video to identify objects, people, and activities of interest.
The project became a public controversy in 2018 when more than 3,000 Google employees signed a petition protesting Google’s involvement, arguing that the company should not be in the business of building war technology. Google subsequently announced it would not renew its Maven contract and published AI principles that included a commitment not to develop AI for weapons.
But Project Maven did not end with Google’s departure. The contract was picked up by other companies, and the project expanded significantly. By 2023, Maven had evolved from a video analysis tool into a broader AI-enabled intelligence platform used across multiple military contexts.
Project Maven’s significance is not primarily technical. It is institutional. Maven demonstrated that the Department of Defense could rapidly integrate commercial AI technology into military operations, bypassing the traditional defense acquisition process that can take years or decades. It established the template for military AI procurement: start with a commercial AI capability, adapt it for military use, deploy it quickly, and iterate.
Palantir: The Intelligence-Industrial Complex
Palantir Technologies, founded in 2003 with seed funding from the CIA’s venture capital arm In-Q-Tel, has become the most prominent and controversial private company in military AI. Palantir’s platforms — Gotham for intelligence agencies, Foundry for commercial clients, and the AI Platform (AIP) launched in 2023 — provide data integration, analysis, and AI-assisted decision-making capabilities that are used by military and intelligence organizations worldwide.
Palantir’s military AI capabilities include:
- Intelligence fusion: Integrating data from multiple sources (signals intelligence, geospatial intelligence, human intelligence, open source intelligence) into unified analysis platforms
- Targeting support: AI-assisted identification and prioritization of military targets
- Battle management: Coordination of forces and assets across operational domains
- Logistics optimization: AI-driven supply chain and force deployment planning
- Cyber operations: AI-assisted network defense and vulnerability analysis
Palantir’s CEO, Alex Karp, has been unapologetic about the company’s military work, arguing that Western democracies must maintain technological superiority over authoritarian adversaries and that companies unwilling to support national defense are shirking a civic obligation. This position has made Palantir both a major defense contractor and a lightning rod for criticism.
Palantir took over the Army’s Project Maven work in 2019 after Google’s withdrawal, and its Maven Smart System (MSS) won a dedicated Army contract in 2024, expanding the company’s role across multiple defense programs. Palantir’s revenue from government contracts exceeded $1.5 billion in 2024, with military and intelligence work representing the majority.
The Palantir model — a private company building AI systems that directly influence military targeting and intelligence analysis — raises profound questions about accountability. When a government employee makes a targeting decision using Palantir’s AI, who is responsible for errors? The operator? The commanding officer? The company that built the system? The engineers who trained the model? Current legal frameworks do not provide clear answers.
Israel: AI in Active Warfare
Israel has become the world’s most advanced practitioner of military AI deployment, and the most consequential case study of what happens when AI systems are used in large-scale military operations against a civilian population.
The Lavender System
Lavender, as reported by journalists in 2024 based on interviews with Israeli intelligence officials, is a machine learning system that generates lists of suspected militants for targeting. The system reportedly analyzes communication patterns, social connections, behavioral indicators, and other data to assign individuals a score indicating their likelihood of being affiliated with Hamas or Palestinian Islamic Jihad.
According to these reports, Lavender identified approximately 37,000 Palestinians as suspected militants in the early weeks of the Gaza conflict that began in October 2023. Israeli intelligence officials who spoke to journalists described a process where Lavender’s recommendations were approved by human operators, but the approval process was described as cursory — sometimes taking as little as 20 seconds per target.
The implications are severe. If an AI system generates 37,000 targeting recommendations and human review takes 20 seconds each, the “human in the loop” is performing a rubber-stamp function, not a meaningful review. The legal requirement for human oversight — enshrined in international humanitarian law, which requires that targeting decisions involve human judgment about proportionality, military necessity, and distinction between combatants and civilians — becomes a procedural fiction.
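The arithmetic behind that claim is worth making explicit. A minimal sketch, using only the figures from the reporting cited above (37,000 targets, roughly 20 seconds of human review per target); everything else is unit conversion:

```python
# Back-of-envelope check on the reported Lavender review workload.
# The 37,000-target and 20-second figures come from the journalism
# cited above; the 8-hour shift is an illustrative assumption.
targets = 37_000
seconds_per_review = 20

total_seconds = targets * seconds_per_review
total_hours = total_seconds / 3600

# One analyst working uninterrupted 8-hour shifts:
analyst_shift_days = total_hours / 8

print(f"Total review time: {total_hours:.0f} hours")
print(f"Uninterrupted 8-hour shifts for one analyst: {analyst_shift_days:.0f} days")
```

Under these figures the entire review burden amounts to roughly 206 person-hours — about a month of one analyst’s shifts for 37,000 life-and-death decisions, which is what makes the 20-second approval look like a formality rather than a legal review.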
The Gospel System
Gospel (or “Habsora” in Hebrew) is a separate AI system used by the Israel Defense Forces to identify buildings, structures, and infrastructure for targeting. Where Lavender identifies individuals, Gospel identifies physical targets — command centers, weapons storage facilities, tunnel entrances, and other infrastructure that the IDF classifies as military targets.
Gospel reportedly processes intelligence data — satellite imagery, signal intercepts, agent reports, drone footage — to generate target recommendations at a rate far exceeding human analytical capacity. Officials have described it as a “mass assassination factory” and a “target factory” that can generate targets faster than the military can strike them.
The Broader Pattern
Israel’s military AI deployment raises questions that extend far beyond the Israeli-Palestinian conflict:
Proportionality distortion. International humanitarian law requires that military strikes be proportionate — that the anticipated military advantage must not be excessive in relation to the expected civilian harm. When AI systems generate targets at industrial scale and speed, the proportionality calculation becomes institutionally distorted. The system’s efficiency in generating targets creates pressure to use them.
Accountability gap. When an AI system recommends a target, a human operator approves it, and a pilot or drone operator executes the strike, who bears legal responsibility if the target turns out to be a civilian? The diffusion of decision-making across AI recommendations, human approval, and operational execution makes it extraordinarily difficult to assign individual criminal responsibility under existing international humanitarian law.
Precedent setting. Israel’s use of AI in warfare establishes precedents that every other military will follow. If AI-assisted targeting with minimal human review is accepted in this conflict, it becomes the operational norm for future conflicts. The standards being set in Gaza will shape military AI deployment worldwide for decades.
Drone Swarms: The Next Frontier
Autonomous drone swarms — coordinated groups of unmanned aerial vehicles that communicate and make tactical decisions collectively without human control of individual units — represent the next frontier in military AI. Multiple nations have demonstrated drone swarm capabilities, and the technology is transitioning from demonstration to operational deployment.
Capability Landscape
| Nation | Program | Status | Key Capabilities |
|---|---|---|---|
| United States | DARPA OFFSET, Replicator initiative | Advanced development, initial deployment | Collaborative autonomy, urban operations |
| China | Multiple PLA programs | Advanced development | Large-scale swarms (1,000+ units demonstrated) |
| Israel | Rafael, Elbit Systems | Operational | Combat-proven autonomous loitering munitions |
| Turkey | Baykar, STM | Operational | Kargu-2 reportedly used autonomously in Libya |
| UK | DSTL programs | Development | Collaborative swarm AI |
| Russia | Kronshtadt, Kalashnikov | Development (limited by sanctions) | Reconnaissance and strike |
| Iran | IRGC programs | Operational | Low-cost mass production, used by proxies |
The Kargu-2 Precedent
In March 2021, a UN Panel of Experts report on the Libyan civil conflict described a March 2020 incident in which STM Kargu-2 loitering munitions — small, AI-equipped drones designed to autonomously identify and engage targets — were used against retreating forces. The report stated that the drones were programmed to attack targets without requiring operator input.
This incident, if accurately described, represents the first documented case of an autonomous weapon system attacking humans without explicit human authorization. The Turkish government and STM disputed some aspects of the report, but the incident highlighted the reality that autonomous weapons are no longer hypothetical.
The Replicator Initiative
The US Department of Defense’s Replicator initiative, announced by Deputy Secretary of Defense Kathleen Hicks in August 2023, aims to field “multiple thousands” of autonomous systems within 18-24 months to counter China’s numerical advantages in conventional forces. Replicator explicitly prioritizes small, cheap, autonomous systems that can be produced at scale — a deliberate contrast to the expensive, exquisite platforms that have characterized American military procurement.
Replicator represents a strategic bet that AI-enabled autonomous systems can offset China’s advantages in personnel and conventional platforms. If successful, it will fundamentally change the character of military competition. If it introduces autonomous weapons at scale without adequate safety mechanisms, it will fundamentally change the character of warfare.
Cybersecurity: AI as Sword and Shield
AI is transforming cybersecurity in both offensive and defensive roles, and the military implications are profound.
Defensive applications include AI-driven network monitoring (detecting anomalous patterns that indicate intrusion), automated vulnerability scanning, threat intelligence analysis, and incident response automation. These applications are relatively uncontroversial and widely deployed across military and civilian networks.
Offensive applications are more concerning. AI can automate the discovery of zero-day vulnerabilities, generate sophisticated phishing attacks, create deepfakes for social engineering, and coordinate distributed cyber operations at machine speed. The integration of AI into military cyber operations means that cyberattacks can be launched faster, at greater scale, and with more sophisticated targeting than human operators alone could achieve.
The most dangerous scenario involves AI-on-AI cyber conflict: offensive AI systems probing defensive AI systems at speeds that outpace human comprehension. If both sides in a military confrontation deploy AI-driven cyber capabilities, the escalation dynamics could outpace the ability of political leaders to understand what is happening, let alone control it.
Nuclear Command and Control: The Ultimate Risk
The integration of AI into nuclear command and control systems represents the most catastrophic potential application of military AI. The risk is not that AI systems will deliberately launch nuclear weapons. The risk is that AI systems will introduce speed, complexity, and automation into nuclear decision-making in ways that reduce the time available for human judgment and increase the probability of error.
Current Concerns
Early warning systems. AI is being integrated into missile warning and detection systems to reduce false alarm rates and provide faster analysis of potential threats. But any system that influences the assessment of whether a nuclear attack is underway directly influences the decision to respond with nuclear weapons. A false positive in an AI-augmented early warning system could trigger a response before human analysts have time to evaluate the data.
Russia’s Perimeter system. Russia’s Perimeter (known colloquially as “Dead Hand”) is a semi-automated nuclear response system designed to ensure retaliatory capability even if Soviet/Russian leadership is destroyed in a first strike. The system’s exact current configuration is classified, but reports indicate it includes automated components that could potentially initiate launch procedures without explicit human authorization. Any AI enhancement of Perimeter increases the risk of unintended nuclear launch.
Decision compression. Hypersonic missiles, which can reach targets in under ten minutes rather than the roughly 30-minute flight time of traditional ICBMs, compress the window for nuclear decision-making from half an hour to a few minutes. If AI systems are used to accelerate nuclear response decisions to match hypersonic timelines, the practical window for human judgment effectively disappears.
Escalation dynamics. In a crisis between nuclear-armed states, AI systems monitoring military deployments, communications, and operational patterns on both sides could generate assessments that trigger preemptive actions before diplomats have time to de-escalate. The speed advantage of AI becomes a liability when the decisions are irreversible and civilizational.
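The decision-compression concern above can be made concrete with a back-of-envelope timeline. Every number in this sketch is an illustrative assumption — the flight times, and the notional fixed costs for detection, assessment, and leadership conferencing — not published operational data:

```python
# Illustrative comparison of residual nuclear decision windows.
# All timeline figures are assumptions for the sake of the arithmetic,
# not published operational parameters.

def decision_window(flight_min: float,
                    detect_min: float = 2.0,   # assumed sensor/confirmation delay
                    assess_min: float = 3.0,   # assumed analyst assessment
                    confer_min: float = 2.0) -> float:  # assumed leadership conference
    """Minutes left for an actual decision after fixed process costs."""
    return flight_min - (detect_min + assess_min + confer_min)

icbm = decision_window(flight_min=30.0)       # traditional ICBM timeline
hypersonic = decision_window(flight_min=8.0)  # hypothetical regional hypersonic

print(f"ICBM residual decision window: {icbm:.0f} minutes")
print(f"Hypersonic residual decision window: {hypersonic:.0f} minutes")
```

Under these assumptions the residual window collapses from 23 minutes to a single minute — the point at which the pressure to hand the assessment, and eventually the decision itself, to automated systems becomes structural rather than optional.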
No One Is Governing This
There is no international treaty, agreement, or even informal understanding governing the role of AI in nuclear command and control. The US and Russia, which together possess approximately 90% of the world’s nuclear weapons, have not conducted bilateral discussions specifically addressing AI in nuclear systems. The broader multilateral framework for nuclear arms control has deteriorated since the US withdrawal from the INF Treaty in 2019 and the suspension of New START inspections.
The absence of governance is not an oversight. It reflects the fundamental difficulty of verifying compliance with restrictions on AI in nuclear systems. Unlike warheads and delivery vehicles, which can be counted and inspected, AI software is invisible, easily modified, and impossible to monitor remotely. Any agreement restricting AI in nuclear systems would require a verification regime that neither side has proposed and that may not be technically feasible.
The Treaty Vacuum: Lethal Autonomous Weapons Systems
The international community has been discussing the regulation of lethal autonomous weapons systems (LAWS) since 2014, when the topic was placed on the agenda of the Convention on Certain Conventional Weapons (CCW). More than a decade later, there is no treaty, no binding agreement, and no meaningful prospect of one.
Timeline of Non-Progress
| Year | Event | Outcome |
|---|---|---|
| 2014 | CCW informal expert discussions on LAWS begin | Discussion; no action |
| 2016 | CCW establishes Group of Governmental Experts (GGE) | Discussion continues |
| 2017-2019 | Multiple GGE sessions | Guiding principles adopted; no binding norms |
| 2021 | GGE proposes normative framework | No consensus |
| 2023 | UN Secretary-General calls for ban on LAWS | Advisory; no enforcement |
| 2023 | Bletchley Declaration | Principles; no binding commitments |
| 2024 | Seoul AI Summit | Joint statement; no enforcement |
| 2024 | UN General Assembly resolution on LAWS | Non-binding |
| 2025-2026 | Continued discussions | No binding agreement in sight |
Why Treaties Have Failed
The failure to regulate autonomous weapons has several causes:
Major power opposition. The United States, Russia, Israel, and several other significant military powers have opposed a binding treaty on autonomous weapons. The US position has emphasized “appropriate levels of human judgment over the use of force” — the language of DoD Directive 3000.09 — while opposing specific prohibitions that might constrain military AI development. Russia has explicitly stated that a ban on autonomous weapons is premature.
Definitional challenges. There is no agreed definition of what constitutes an “autonomous weapon.” Is a mine autonomous? A guided missile? A drone with automatic target recognition? A missile defense system that intercepts incoming warheads without human approval? The spectrum from automated to autonomous is continuous, and drawing a legal line along it has proven impossible.
Verification impossibility. Even if a treaty were negotiated, verifying compliance would be extraordinarily difficult. AI software can be updated remotely. A weapon system’s level of autonomy can be changed with a software update. There is no physical inspection regime that can determine whether a weapon system is autonomous.
Dual-use problem. The same AI technologies used in autonomous weapons — computer vision, reinforcement learning, sensor fusion — are used in civilian applications. Restricting their military application without restricting their civilian development is technically and practically impossible.
Strategic incentive. Nations that believe autonomous weapons provide a military advantage have no incentive to restrict them. The game theory of autonomous weapons mirrors the game theory of nuclear weapons before the Non-Proliferation Treaty: each nation’s rational self-interest is to develop the technology while hoping others will exercise restraint.
Private Military AI: The Mercenary Algorithm
The involvement of private companies in military AI development creates accountability gaps that existing legal frameworks do not address.
Key Private Military AI Companies
| Company | Headquarters | Military AI Products | Key Clients |
|---|---|---|---|
| Palantir | Denver, US | Intelligence fusion, targeting, battle management | US DoD, UK MoD, NATO allies |
| Anduril | Costa Mesa, US | Autonomous surveillance towers, counter-drone systems, submarine drones | US DoD, Five Eyes |
| Shield AI | San Diego, US | Autonomous drone piloting (Hivemind AI) | US military, allies |
| Elbit Systems | Haifa, Israel | Autonomous weapons, drone systems, ISR | Israel, NATO allies |
| Rafael | Haifa, Israel | Iron Dome, autonomous munitions, swarm systems | Israel, export clients |
| Baykar | Istanbul, Turkey | Bayraktar drones (semi-autonomous) | Turkey, Ukraine, 30+ nations |
| L3Harris | Melbourne (Florida), US | Autonomous ISR, electronic warfare AI | US DoD |
These companies operate in a regulatory space that is simultaneously militarized and commercialized. They develop technologies with lethal applications, sell them to governments, and are subject to export controls but not to the kinds of safety requirements that apply to, say, pharmaceutical companies developing drugs that could kill people.
Anduril, founded by Palmer Luckey (co-founder of Oculus VR, acquired by Facebook), has been particularly vocal about positioning itself as a Silicon Valley company that embraces military work. Anduril’s Lattice platform provides AI-enabled command and control, surveillance, and autonomous systems coordination. The company’s approach — move fast, iterate rapidly, apply commercial AI development practices to military problems — mirrors the Project Maven template of rapid commercial-to-military technology transfer.
HUMAIN’s Military Potential
Saudi Arabia’s HUMAIN has not publicly announced military applications for its AI infrastructure. But the potential for military use is inherent in the infrastructure being built, and INHUMAIN.AI would be negligent not to address it.
Compute infrastructure. The data center and compute infrastructure HUMAIN is building could support military AI applications as easily as civilian ones. AI-powered surveillance, intelligence analysis, autonomous systems coordination, and cyber operations all run on the same hardware platforms that HUMAIN is deploying for commercial AI.
Partnership network. Several of HUMAIN’s technology partners — including NVIDIA, which supplies GPUs to military AI programs worldwide, and companies with defense divisions — have dual-use products that serve both civilian and military customers.
Strategic context. Saudi Arabia is one of the world’s largest arms importers. The Kingdom has been engaged in military operations in Yemen since 2015, has an active security apparatus that has used surveillance technology against dissidents and journalists, and faces ongoing security threats from regional rivals. The strategic incentive to develop military AI capabilities is substantial.
Governance gap. Saudi Arabia is not a signatory to most international arms control treaties and is not subject to the CCW’s LAWS discussions in a binding sense. There is no international framework that would prevent Saudi Arabia from applying HUMAIN’s infrastructure to military applications, and no transparency requirement that would obligate disclosure.
INHUMAIN.AI does not assert that HUMAIN is developing military AI. We assert that the infrastructure being built is dual-use by nature, that the strategic incentives for military application are real, and that the absence of transparency or governance creates the conditions in which military AI development can occur without accountability.
For ongoing tracking of HUMAIN’s activities, see: HUMAIN Watch.
What International Humanitarian Law Requires — And What It Cannot Enforce
Existing international humanitarian law (IHL) imposes requirements on military operations that are directly relevant to AI:
Distinction. Parties to a conflict must distinguish between combatants and civilians. An AI system used for targeting must be capable of making this distinction reliably. No current AI system has been independently verified to do so with the accuracy and consistency that IHL requires.
Proportionality. Attacks must not cause civilian harm excessive in relation to the anticipated military advantage. This requires a contextual judgment that balances incommensurable values (civilian lives vs. military objectives). It is unclear whether AI systems can make proportionality judgments, or whether the concept of “judgment” is even applicable to algorithmic outputs.
Precaution. Parties must take feasible precautions to minimize civilian harm. In the context of AI targeting, this includes verifying target identification, assessing collateral damage, and considering alternative means of achieving the military objective.
Accountability. IHL assigns individual criminal responsibility for violations. When AI systems contribute to targeting decisions, the chain of responsibility becomes diffuse. The commander who authorized the use of the AI system, the operator who approved a specific targeting recommendation, the engineer who designed the system, and the company that trained the model all bear some moral responsibility. But legal responsibility under IHL attaches to individuals, and distributing it across a human-AI decision chain is uncharted legal territory.
The International Committee of the Red Cross (ICRC) has called for new international rules on autonomous weapons, arguing that existing IHL is necessary but insufficient. The ICRC’s position is that autonomous weapons that cannot comply with IHL should be prohibited, and that human control over the use of force must be meaningful, not nominal.
The Arms Race Dynamics
Military AI is subject to arms race dynamics that are even more dangerous than those governing nuclear weapons, because the barriers to entry are lower, the pace of development is faster, and the norms governing use are weaker.
Speed pressure. Each major military power believes it must develop AI capabilities as fast as possible to avoid falling behind adversaries. This pressure works against thorough testing, safety evaluation, and ethical review. The Replicator initiative’s 18-24 month timeline explicitly prioritizes speed over deliberation.
Proliferation. Unlike nuclear weapons, which require rare materials and massive industrial infrastructure, military AI can be developed by any nation or non-state actor with access to commercial AI technology. The knowledge base is largely open. The hardware is commercially available (at least at below-frontier levels). The barrier between civilian and military AI capability is thin.
Normative vacuum. Nuclear weapons were developed within a pre-existing normative framework that, while imperfect, established taboos against their use. Military AI is developing in a normative vacuum. There is no equivalent of the nuclear taboo for autonomous weapons. There is no equivalent of deterrence theory that creates stable equilibria. There is no equivalent of arms control verification that enables trust-building.
First-mover advantage. In nuclear weapons, first-mover advantage is limited by the reality of mutually assured destruction. In military AI, first-mover advantage may be decisive: the military that deploys effective autonomous systems first may achieve operational advantages that slower adversaries cannot overcome. This creates incentives to deploy before testing is complete.
What INHUMAIN.AI Demands
Military AI is the domain where the consequences of inadequate governance are most severe and most irreversible. INHUMAIN.AI calls for:
Transparency. Nations deploying AI in military operations should disclose the role of AI in targeting decisions, the level of human oversight involved, and the testing and evaluation procedures applied to military AI systems. Classification is not an acceptable excuse for zero accountability.
International regulation. The CCW process has failed. A new diplomatic initiative, potentially modeled on the Ottawa Treaty (landmines) or the Convention on Cluster Munitions, should pursue a binding international agreement on autonomous weapons with the participation of civil society, not just the governments that deploy these weapons.
Nuclear AI prohibition. The integration of AI into nuclear command and control should be addressed as a matter of existential urgency. The US and Russia should begin bilateral discussions specifically addressing AI in nuclear systems, with the goal of establishing mutual constraints on automation in nuclear decision-making.
Corporate accountability. Companies that build military AI systems should be subject to transparency requirements, independent safety audits, and legal liability for the consequences of their systems’ decisions. The current model — where companies build targeting AI under classification and face no public accountability for its effects — is incompatible with democratic governance.
The military applications of AI are the sharpest edge of inhuman intelligence. They are where the consequences of getting AI wrong are measured not in stock prices or benchmark scores but in human lives. Someone needs to watch. That is what INHUMAIN.AI does.
For the broader geopolitical context, see: AI Geopolitics: Who Controls Inhuman Intelligence Controls the Century.
For analysis of AI safety in all domains, see: The Complete Guide to AI Safety.