3.2.1 Autonomous Weapons Systems
Lieutenant Colonel Andriy Kovalenko watches a screen in a command center somewhere in Ukraine. On it, a small quadcopter drone hovers above a treeline, its camera locked onto a Russian armored vehicle two kilometers away.
Andriy doesn't control the drone. He's just watching. The drone is flying itself, using AI-trained computer vision to identify targets, navigate terrain, and adjust its approach based on wind, obstacles, and enemy countermeasures.
It finds a trajectory. It accelerates. It strikes.
Andriy marks it in the log. One vehicle destroyed. Zero human input beyond initial deployment. The entire engagement—detection, tracking, decision, strike—took forty-three seconds.
This is 2025. Ukraine has produced millions of drones this year, on pace for roughly 4.5 million by year's end. Many are remotely piloted, but a growing number are autonomous, using AI retrained on battlefield data to identify Russian forces and strike without waiting for a human operator to approve each target. Ukrainian officials report that these AI-enabled drones have raised strike accuracy from 30–50% to around 80%, destroying targets with one or two drones instead of eight or nine. In September 2025 alone, Ukrainian drone units reported more than 18,000 verified Russian personnel struck—double the total from a year earlier.
Andriy knows the debate about autonomous weapons: the ethical concerns, the calls for bans, the UN resolutions. He's read the arguments that machines shouldn't decide who lives and dies, that autonomous systems violate international humanitarian law, that killer robots represent an existential threat. But he's also seen what happens when a smaller force confronts a larger, better-equipped adversary. You use every advantage available. And right now, AI-enabled drones are keeping Ukraine in the fight.
What is unfolding in Ukraine is not a prototype or proof of concept. It is operational warfare. The question is not whether autonomous weapons will be used—they already are. The question is what comes next, and whether anyone can control it.
The Autonomy Spectrum
"Autonomous weapon" is an imprecise term. What it describes is better understood as a spectrum of human involvement, ranging from full operator control to complete machine independence.
At one end are remotely piloted systems, where a human operator controls every movement, monitors the video feed, makes the targeting decision, and authorizes the strike. From there the spectrum runs through three further stages:

- Semi-autonomous ("human in the loop"): the system identifies and tracks targets automatically, but a human must authorize each engagement before it fires.
- Supervised autonomous ("human on the loop"): the system selects and engages targets within predefined parameters while a human monitors it and can intervene in edge cases.
- Fully autonomous ("human out of the loop"): once deployed, the system selects, engages, and destroys targets without any human input.
As of 2025, most military drones and robots have not yet crossed into genuine full autonomy. But the line keeps moving. Systems that were human-in-the-loop five years ago are now human-on-the-loop. Systems that once required individual authorization for each engagement now operate under general mission parameters, with human oversight reserved only for exceptional circumstances. On the Ukraine battlefield, where electronic warfare routinely jams communications and seconds determine outcomes, drones are increasingly operating with full autonomy out of practical necessity. When a radio link to a human operator is severed, the drone must either abort its mission or make its own decisions. More often, it is the latter.
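To make the spectrum and the link-loss drift concrete, here is a toy sketch in Python. The type and function names are invented, and real flight software is vastly more complex; the point is only that the human's role, and what a severed link does to it, is a property of configuration rather than of hardware.

```python
from enum import Enum, auto

class AutonomyLevel(Enum):
    REMOTELY_PILOTED = auto()   # operator flies the aircraft and fires
    SEMI_AUTONOMOUS = auto()    # human in the loop: each strike needs explicit approval
    SUPERVISED = auto()         # human on the loop: system acts, human may veto
    FULLY_AUTONOMOUS = auto()   # human out of the loop

def may_engage(level: AutonomyLevel, *, human_approved: bool = False,
               human_vetoed: bool = False, link_up: bool = True) -> bool:
    """Toy engagement gate: who must say yes, and who merely may say no."""
    if level in (AutonomyLevel.REMOTELY_PILOTED, AutonomyLevel.SEMI_AUTONOMOUS):
        # An explicit human "go" is required, and it can only arrive over the link.
        return link_up and human_approved
    if level is AutonomyLevel.SUPERVISED:
        # The system proceeds unless a human veto gets through in time.
        return not (link_up and human_vetoed)
    return True  # FULLY_AUTONOMOUS: no gate at all

# With the link jammed, the supervised system behaves exactly like the
# fully autonomous one, even though a human is trying to veto the strike:
assert may_engage(AutonomyLevel.SUPERVISED, human_vetoed=True, link_up=False)
```

The last line is the drift in miniature: jamming does not pause a supervised system, it silently promotes it.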
Ukraine's AI development has concentrated on three areas: target identification using trained computer vision, terrain mapping for autonomous navigation, and coordinated swarms that execute attacks without centralized control. Each of these capabilities is now field-proven. The technology exists, works under combat conditions, and is scaling rapidly.
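Of the three, decentralized swarm coordination is the least intuitive: how do drones agree on who strikes what with no commander issuing orders? One common family of techniques has every vehicle run the same deterministic assignment algorithm over a shared broadcast picture, so all peers compute identical answers without a central node. The sketch below is a minimal invented illustration of that idea, not a description of any fielded system.

```python
import math

def assign(drones: dict[str, tuple[float, float]],
           targets: dict[str, tuple[float, float]]) -> dict[str, str]:
    """Greedy nearest-pair matching; deterministic, so every peer agrees."""
    # Lexicographic sort: distance first, then IDs as a deterministic tie-break.
    pairs = sorted((math.dist(drone_pos, target_pos), d, t)
                   for d, drone_pos in drones.items()
                   for t, target_pos in targets.items())
    assignment, taken_drones, taken_targets = {}, set(), set()
    for _, d, t in pairs:
        if d not in taken_drones and t not in taken_targets:
            assignment[d] = t
            taken_drones.add(d)
            taken_targets.add(t)
    return assignment

# Two drones, two targets; each drone computes the same answer locally.
print(assign({"d1": (0, 0), "d2": (5, 0)}, {"t1": (1, 0), "t2": (6, 0)}))
# -> {'d1': 't1', 'd2': 't2'}
```

Real swarm algorithms must also handle lossy communications, disagreement about the shared picture, and vehicles dropping out mid-mission, which is where most of the engineering difficulty lives.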
The Pentagon's Replicator Program
The United States has been watching the Ukraine conflict closely, and the lessons it is drawing are driving investment at a scale that would have seemed extraordinary just a few years ago. For the 2026 fiscal year, the Pentagon requested a record $14.2 billion for AI and autonomy research. A core element of that investment is the Replicator program, which received $1 billion in 2025 to accelerate the deployment of thousands of expendable autonomous drones and surface vessels. The stated goal is to field significant numbers of relatively inexpensive, AI-enabled autonomous vehicles by 2026 to maintain strategic parity with China.
The logic behind Replicator reflects a new battlefield paradigm. China and Russia are developing and deploying massive numbers of inexpensive drones and autonomous systems. In a major conflict, these forces would use cheap, expendable autonomous weapons to saturate and overwhelm expensive U.S. platforms. The strategic counter is to deploy swarms of your own. The era of a few exquisite, human-piloted aircraft has given way to a vision of thousands—eventually millions—of cheap autonomous systems acting in coordinated formations.
Once swarms are deployed at that scale, meaningful human oversight at the engagement level becomes functionally impossible. The pace of autonomous warfare exceeds human reaction time; decisions happen in milliseconds, and engagements conclude before a human operator has finished processing the initial information. This creates a tension at the heart of the enterprise: delegating life-and-death decisions to algorithms that operate faster than human oversight can function. The potential benefits are real—AI targeting systems can be more accurate than stressed human operators, can follow rules of engagement with exact fidelity to their programming, and can minimize certain forms of collateral damage. But the risks are equally real: misidentification of civilians as combatants, cascading system failures, and decisions that violate international law but are impossible to reverse once an engagement has begun.
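The arithmetic behind that claim is stark even under generous assumptions. Human simple reaction time is on the order of 250 milliseconds; the decision rate below is an assumed, illustrative figure.

```python
# How many machine decisions fit inside a single human reaction?
control_loop_hz = 100      # assumed autonomous decision rate (illustrative)
human_reaction_s = 0.25    # typical human simple reaction time

print(int(control_loop_hz * human_reaction_s), "machine decisions per human reaction")
# -> 25 machine decisions per human reaction
```

By the time a supervising operator has merely noticed that something is happening, the system has already acted dozens of times.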
The Pentagon maintains its commitment to "meaningful human control" over lethal autonomous weapons. What that phrase means when systems operate at machine speed, in communications-contested environments, is something the Department of Defense has not yet resolved—and the question grows more urgent as deployment scales.
The Drone-Dominated Battlefield
The transformation of the battlefield is perhaps most starkly illustrated by a single statistic: drones now account for an estimated 70–80% of all casualties in the Ukraine war. Not tanks, not artillery, not infantry—drones. As AI improves targeting and coordination, that proportion will likely increase, because autonomous drones are cheaper, more expendable, and more precise than traditional systems, and they eliminate the risk to human pilots entirely.
This fundamentally changes the logic of attrition warfare. Historically, the human cost of sustained conflict constrained strategy: trained soldiers, pilots, and specialists could not be replaced quickly, and their loss carried strategic weight. If autonomous drones do the fighting, attrition becomes primarily an industrial production problem. Whoever manufactures more drones at lower cost and greater speed holds the decisive advantage.
Ukraine has demonstrated how rapidly that industrial scaling can happen. Drone production grew from 2.2 million units in 2024 to 4.5 million in 2025, more than doubling in a single year. As production scales, unit costs drop sharply. Early first-person view (FPV) drones cost several thousand dollars; by 2025, several hundred; projections suggest tens of dollars within a few years. At that price point, autonomous drones can be deployed in quantities that dwarf any previous weapons system, treated as expendable as conventional munitions.
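A rough worked example shows how accuracy and unit cost compound. The figures below are round numbers drawn from the reporting above (eight or nine drones per destroyed target at several thousand dollars each, versus one or two at several hundred) and are illustrative only.

```python
# Illustrative cost per destroyed target; all figures are rough.
before_drones, before_unit_cost = 8.5, 3_000   # "eight or nine" drones at several thousand dollars
after_drones, after_unit_cost = 1.5, 500       # "one or two" drones at several hundred dollars

before_cost = before_drones * before_unit_cost   # ~$25,500 per destroyed target
after_cost = after_drones * after_unit_cost      # ~$750 per destroyed target
print(f"~{before_cost / after_cost:.0f}x cheaper per destroyed target")  # ~34x
```

If unit costs fall to tens of dollars as projected, the same arithmetic puts the cost of destroying an armored vehicle below the cost of a single conventional artillery shell.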
The implications extend beyond any single conflict. As the cost and technical complexity of effective autonomous weapons fall, access will spread to smaller states, non-state actors, and potentially to well-resourced criminal organizations. A near-future battlefield could see machine-versus-machine engagements unfolding at speeds beyond human comprehension, with commanders making only the highest-level strategic decisions while tactical combat is managed entirely by AI. That scenario is not a distant projection—it is a trajectory already visible in the data coming out of Ukraine.
The Regulatory Vacuum
The international community has been attempting to regulate lethal autonomous weapons systems (LAWS) for years. The results, measured against the pace of development, have been minimal.
In November 2025, the UN General Assembly's First Committee passed a resolution calling for negotiations toward a legally binding agreement on LAWS, with the Seventh Review Conference of the Convention on Certain Conventional Weapons (CCW), scheduled for 2026, as a target deadline. The vote was striking: 156 nations in favor, with only five opposed. Among those five were the United States and Russia, the two countries leading the world in autonomous weapons development.
The UN Secretary-General has called for a legally binding treaty prohibiting LAWS that function without human oversight or that cannot be used in compliance with international humanitarian law. Despite the rhetorical urgency, most analysts regard a binding treaty by 2026 as unlikely, and a comprehensive permanent agreement as more remote still. The core obstacle is structural: the countries at the frontier of autonomous weapons development have no incentive to constrain themselves while adversaries continue to advance. Ceding that advantage unilaterally would be perceived as strategic folly by any military establishment operating under competitive pressure.
Verification adds a further layer of difficulty. Hardware can be inspected—missiles, drones, and vehicles are visible and countable. The software and decision-making logic running on a system are far harder to audit. The distinction between a "supervised" autonomous system and a "fully autonomous" one can rest on a single software parameter, changeable in the field or remotely. Even with inspection regimes and international monitors, meaningful enforcement of autonomy restrictions would be extraordinarily difficult to achieve.
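The problem is easy to state concretely. Consider a hypothetical mission configuration; the format and field names here are invented for illustration.

```python
import json

# Everything treaty-relevant about this hypothetical system's autonomy
# lives in the last field of its mission configuration.
config = json.loads("""
{
    "target_classes": ["armored_vehicle"],
    "engagement_area": "assigned_grid_sector",
    "require_human_authorization": true
}
""")

# One value flipped -- at the depot, in the field, or over the air --
# and the same airframe, sensors, and software cross the line from
# "supervised" to "fully autonomous":
config["require_human_authorization"] = False
```

An inspection regime can count airframes and even audit a snapshot of the software; it cannot easily certify what this value will be set to on the day the system is used.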
What has emerged is a two-tiered approach that is largely rhetorical: major powers agree in principle that certain categories of fully autonomous systems raise serious concerns, while declining to specify which categories or how restrictions would be enforced. Meanwhile, development accelerates and battlefield deployments expand. The window for meaningful regulation is narrowing. As production volumes grow, costs fall, and the technology diffuses to more actors, any future treaty framework risks obsolescence before it can be ratified—let alone implemented.
The Debate over Prohibition
Few questions in contemporary military ethics generate as much sustained disagreement as whether lethal autonomous weapons should be prohibited outright or managed through regulation under existing legal frameworks. The arguments on both sides are well-developed, and they operate across moral, legal, practical, and strategic dimensions.
The prohibitionist case begins with a moral claim: that the decision to take a human life is inherently a human act and cannot be delegated to an algorithm. This is not merely a technical objection—it is a claim about dignity and accountability. A person killed by an autonomous weapon is denied the moral consideration of a human judgment; a society that deploys such weapons abdicates a responsibility that should never be mechanized.

The legal argument follows directly: international humanitarian law requires parties to a conflict to distinguish between civilians and combatants, to weigh military advantage proportionally against civilian harm, and to take precautionary measures to minimize collateral damage. These are context-dependent, ethically complex judgments. Current AI systems cannot reliably assess whether a civilian is sheltering in a building voluntarily or under coercion, or weigh the humanitarian consequences of a strike with the nuance that IHL demands.

On practical grounds, prohibitionists point to the risks of misidentification, accidental escalation, and cascading failures—a malfunctioning autonomous system could trigger conflicts, violate ceasefires, or cause civilian deaths that a human operator would have prevented, and once such systems are deployed at scale, they may be impossible to recall. Strategically, the prohibitionist position holds that an autonomous arms race compresses crisis decision timelines: if both sides deploy systems that respond faster than humans, conflicts could escalate in seconds, eliminating the possibility of diplomatic intervention or de-escalation.
The permissive argument—typically advanced by the United States, Russia, China, and Israel—does not dismiss these concerns. Rather, it contends that prohibition is neither enforceable nor strategically viable. On precision, AI systems do not experience fatigue, fear, or the psychological distortions of combat stress, and may apply rules of engagement more accurately than soldiers making decisions under fire. Human forces violate the laws of war with troubling frequency, not because soldiers are immoral, but because combat degrades judgment in ways that algorithms are not subject to.

On legal compliance, proponents argue that autonomous systems can be programmed to apply IHL standards more consistently than human combatants, and that the real issue is the quality of system design and the rules embedded in it, not autonomy per se. The strategic necessity argument is the bluntest: if adversaries deploy autonomous weapons, unilateral restraint is a military defeat, not an ethical position. Finally, on technological inevitability, major powers contend that the technology is too accessible and too militarily decisive to be stopped by treaty; bans will be circumvented, and the prudent course is to engage seriously with accountability frameworks under existing international law rather than pursue prohibitions that will be ignored.
The table below summarizes the core arguments across these four dimensions.
| Dimension | Case for Prohibition | Case Against Prohibition |
|---|---|---|
| Moral | Delegating lethal decisions to machines violates human dignity and eliminates meaningful accountability | Reduced human casualties on both sides may itself be a morally relevant consideration |
| Legal | AI cannot reliably apply IHL's context-dependent standards of distinction and proportionality | Autonomous systems can be programmed to apply rules of engagement more consistently than humans under stress |
| Practical | Risks of misidentification, accidental escalation, and irreversible cascading failures | AI precision can reduce civilian casualties compared to human decision-making in chaotic conditions |
| Strategic | Autonomous arms race compresses crisis timelines, increasing the risk of accidental or unintended war | Unilateral restraint cedes decisive advantage to adversaries who continue development |
Neither side has resolved the debate. Prohibitionists have succeeded in establishing the moral stakes and building broad international support for the principle of meaningful human control. Permissive states have successfully blocked binding legal constraints while continuing to develop and deploy the technology reshaping how wars are fought. The gap between the rhetoric of regulation and the reality of proliferation is, for now, the defining characteristic of the LAWS landscape.
Looking Ahead
The trajectory of autonomous weapons development has a self-reinforcing logic that makes course correction increasingly difficult. As autonomous systems prove effective in conflict, they become standard practice; as they become standard, adversaries adopt them to avoid disadvantage; as adversaries adopt them, pressure intensifies to develop more capable and more independent systems. No individual actor—no nation, no military—can easily step off this escalator unilaterally, because the cost of doing so while others continue is measured directly in battlefield outcomes.
The decisions made over the next several years will set parameters that are likely to hold for decades. Whether the international community can establish meaningful norms before autonomous systems are deployed at scales that make effective regulation unworkable remains genuinely uncertain. That outcome will be shaped not only by what happens in diplomatic negotiations and policy documents, but by what happens in active conflicts—where the pressure of immediate military necessity continues to push the boundaries of what autonomous systems are permitted to do, with or without formal authorization.
The technology is not waiting for policy to catch up. The open question is whether guardrails arrive before they are needed, or long after.
Key Takeaways
- Lethal autonomous weapons are already in operational use. Ukraine's AI-enabled drone campaign demonstrates that autonomous targeting and strike capabilities are not a future concern—they are a present reality, with documented effects on accuracy, scale, and casualty rates.
- The autonomy spectrum is shifting. Systems that once required explicit human authorization for each engagement are moving toward supervised autonomy and, in communication-denied environments, toward full autonomy. The dividing line between "human on the loop" and "human out of the loop" is increasingly determined by battlefield conditions rather than policy.
- Drone warfare is restructuring the economics of conflict. When drones account for 70–80% of casualties and unit costs are falling toward tens of dollars, attrition warfare becomes an industrial competition rather than a human one—with profound implications for how states calculate the costs of war.
- Regulatory efforts face structural obstacles. The countries leading in autonomous weapons development have strong incentives to block binding constraints, and the technical challenge of verifying autonomy restrictions makes enforcement extremely difficult even where political will exists.
- The arguments for and against prohibition are both substantial. Prohibitionists raise serious moral, legal, practical, and strategic concerns. Permissive states offer credible counterarguments on precision, compliance, and strategic necessity. The debate remains unresolved, while deployment continues.
- The window for effective governance is narrowing. As production scales, costs fall, and the technology diffuses to more actors, the conditions for meaningful international regulation become harder to create. Decisions made now will shape the framework—or absence of framework—under which autonomous weapons proliferate.