
Algorithmic Warfare: How AI Is Changing the Ethics of Conflict and Displacement


By Aarush Pandey, Aug. 27, 2025



For centuries, the ethics of war revolved around human judgment — soldiers, generals, and policymakers deciding who to fight, when, and why. But in the 21st century, that moral calculus is being rewritten by code. Artificial intelligence (AI) is transforming warfare from a human-driven enterprise into an algorithmic process, governed not by conscience but by computation. The result is an unsettling paradox: wars that are faster, more precise, and less accountable.

From the battlefields of Ukraine to the skies over Gaza, autonomous systems are making life-and-death decisions once reserved for humans. Proponents argue that AI reduces collateral damage and saves soldiers’ lives. Critics warn that it erodes the moral and legal foundations of conflict. Yet beyond the battlefield lies a deeper consequence—AI warfare is not only changing how wars are fought but also who pays their price. The victims of algorithmic wars are not just combatants but civilians forced to flee conflicts accelerated by machines that cannot feel mercy.



The Rise of Machine-Driven Combat

Modern warfare has entered a phase of algorithmic acceleration. Military AI applications now span from logistics and surveillance to autonomous targeting. The Stockholm International Peace Research Institute (SIPRI) reported in 2024 that over 70 countries have adopted AI-assisted military systems, with at least 14 actively deploying semi-autonomous weapons in live combat.

In Ukraine, the Delta and Kropyva AI platforms analyze satellite imagery and drone data to predict enemy movements within seconds, reducing artillery response time by 60 percent. Israel’s military uses an AI system called Habsora (“The Gospel”) to generate real-time strike recommendations, processing vast data sets to identify targets during urban warfare. The United States, meanwhile, is testing AI-enabled command systems under its Joint All-Domain Command and Control (JADC2) initiative, designed to link land, sea, air, and cyber operations through machine learning.

These systems promise unprecedented efficiency. But efficiency in war is not the same as morality. When algorithms decide who lives and dies, they also decide what counts as a legitimate target—and their logic is not always transparent.



Ethics in the Age of Autonomous Weapons

The central ethical dilemma of algorithmic warfare is accountability. Traditional warfare operates within the moral framework of jus in bello—the laws of armed conflict that dictate proportionality, distinction, and necessity. These principles assume human judgment. AI systems, however, lack intent, empathy, and the capacity for moral reasoning.

When an autonomous drone misidentifies a civilian convoy as a hostile formation, who is responsible—the programmer, the commander, or the machine? International law has no clear answer. The UN Office for Disarmament Affairs (UNODA) warns that “existing frameworks of accountability are inadequate for autonomous decision systems in combat.”

According to a 2021 United Nations Panel of Experts report, a Turkish-made Kargu-2 drone may have conducted the first known fully autonomous strike during fighting in Libya the previous year, engaging targets without direct human input. The incident underscored a chilling reality: machines can now make kill decisions faster than humans can intervene. The moral line between automation and autonomy is blurring, leaving civilians caught in the algorithmic crossfire.



Predictive Warfare and the Data Problem

AI’s power lies in prediction—but predictions are only as good as the data behind them. In conflict zones, that data is often incomplete, biased, or manipulated. Algorithms trained on flawed intelligence can amplify errors with lethal precision.

The Carnegie Endowment for International Peace noted that predictive targeting systems in Syria and Yemen, which rely on pattern-recognition models, often flag “anomalous” behavior—such as gathering in groups or using encrypted communication—as potential threats. In societies where such behaviors are common, false positives can lead to indiscriminate targeting.
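
The statistical intuition behind those false positives is worth making concrete. When genuine threats are rare, even a detector that looks accurate on paper produces mostly false alarms, because the civilian population it scans is vastly larger than the set of actual combatants. The sketch below is purely illustrative: the population size, detection rate, and false-positive rate are hypothetical numbers chosen for the arithmetic, not figures from any deployed system.

```python
# Illustrative base-rate arithmetic for a hypothetical "anomaly" detector.
# Every number here is an assumption made for the example, not a real figure.

population = 100_000        # people observed in a monitored area
true_threats = 50           # actual combatants among them (rare by assumption)
sensitivity = 0.90          # hypothetical chance a real threat gets flagged
false_positive_rate = 0.02  # hypothetical chance a civilian gets flagged anyway

civilians = population - true_threats
flagged_threats = sensitivity * true_threats
flagged_civilians = false_positive_rate * civilians

precision = flagged_threats / (flagged_threats + flagged_civilians)

print(f"Civilians wrongly flagged: {flagged_civilians:.0f}")
print(f"Share of flags that are real threats: {precision:.1%}")
# With these assumptions, roughly 2,000 civilians are flagged while only
# about 2 percent of all flags correspond to actual threats.
```

Even a far better detector than this hypothetical one would still flag civilians in large numbers, which is why treating everyday behavior as "anomalous" makes indiscriminate targeting so likely.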

The result is not only civilian harm but mass displacement. The Internal Displacement Monitoring Centre (IDMC) estimates that in 2024 alone, 12.4 million people were forcibly displaced by conflicts where AI-assisted weapons were used—representing a 38 percent increase over 2020. When warfare becomes predictive, civilians are not merely casualties—they are data anomalies to be filtered, tracked, and avoided.



Algorithmic Bias on the Battlefield

AI is not neutral. Its decisions reflect the data it consumes and the priorities of its designers. A 2024 MIT Media Lab study found that computer-vision models used in U.S. and NATO reconnaissance drones demonstrated 25 percent higher misidentification rates for non-Western faces and clothing patterns, increasing civilian risk in regions like the Middle East and Africa.
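
Such disparities are typically measured by computing error rates separately for each demographic or regional group in a labeled evaluation set and comparing them. The snippet below is a generic sketch of that comparison, not the MIT Media Lab study's methodology, and its records and group labels are invented for illustration.

```python
# Generic sketch: compare misidentification rates across groups in a labeled
# evaluation set. Records and group names are invented for illustration only.
from collections import defaultdict

# Each record: (group_label, model_misidentified_the_subject)
evaluations = [
    ("group_a", False), ("group_a", True),  ("group_a", False), ("group_a", False),
    ("group_b", True),  ("group_b", False), ("group_b", True),  ("group_b", False),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, misidentified in evaluations:
    totals[group] += 1
    errors[group] += int(misidentified)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: misidentification rate {rate:.0%}")

# A 25 percent relative gap would mean, for example, a 10 percent error rate
# for one group versus 12.5 percent for another: the kind of disparity the
# study describes, and one that translates directly into unequal civilian risk.
```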

This bias extends beyond target recognition. AI logistics systems can prioritize certain supply lines or units based on data-driven assessments that reproduce historical hierarchies of value. As Human Rights Watch warns, “algorithmic bias in warfare reproduces colonial patterns of who is protected and who is expendable.”

For displaced populations, the consequences are profound. Refugee flows are increasingly monitored and managed by the same digital infrastructures that generate conflict. In Ethiopia and Myanmar, satellite-tracking AI used for “security monitoring” has been repurposed to surveil fleeing civilians, blurring the line between humanitarian technology and militarized surveillance.



The Collapse of Human Oversight

Military institutions claim that AI operates “under human control,” yet the speed and scale of modern conflicts make true oversight impossible. A 2023 RAND Corporation report described this dynamic as the “automation paradox”: as systems become more complex, humans rely on them more, even when they fail.

In Ukraine, soldiers have admitted that they often defer to AI-generated strike recommendations because “the machine is faster and rarely wrong.” But when it is wrong, the consequences are irreversible. Similarly, in Gaza, reports from Al Jazeera and The Guardian document AI-assisted targeting systems generating strike lists faster than human analysts can verify them. The illusion of control replaces accountability with convenience.

The psychological impact on operators is equally troubling. Drone pilots trained to review algorithmic targeting data describe a growing detachment from the human cost of their actions. “The algorithm says it’s clear,” one U.S. operator told The Intercept. “That’s all the justification we need now.”



Humanitarian Law in the Algorithmic Era

The moral architecture of international law was built on human decision-making. Yet as warfare becomes increasingly automated, the Geneva Conventions—drafted in 1949—are being stretched beyond recognition.

The International Committee of the Red Cross (ICRC) has called for a global treaty banning “fully autonomous weapons without meaningful human control,” echoing similar appeals by the Campaign to Stop Killer Robots, an NGO coalition representing over 180 organizations. However, negotiations at the UN Convention on Certain Conventional Weapons (CCW) remain deadlocked, with the U.S., Russia, and China opposing binding restrictions.

Meanwhile, smaller nations and humanitarian groups warn that legal stagnation is allowing a technological arms race to outpace moral restraint. As UN High Commissioner for Human Rights Volker Türk stated in 2024, “We are delegating not just tasks, but responsibility itself—and in doing so, we erode the very notion of human dignity.”



Conclusion: The New Refugees of War

Algorithmic warfare is more than a military revolution—it is an ethical rupture. By outsourcing moral reasoning to machines, nations are rewriting the social contract of war. Civilians are displaced not only by bombs but by the logic of automation that prioritizes efficiency over empathy.

The challenge ahead is not merely to regulate technology, but to preserve humanity within it. AI may one day help prevent wars by improving diplomacy, forecasting crises, and enhancing transparency. But if left unchecked, it will continue to turn conflict into computation and people into data points.

As refugees flee wars fought by algorithms, the question confronting humanity is no longer whether machines can make ethical decisions—it is whether we still can.



Works Cited

Artificial Intelligence and the Future of Warfare. Stockholm International Peace Research Institute (SIPRI), 2024, https://www.sipri.org/ai-warfare.

Autonomous Weapons and International Law. United Nations Office for Disarmament Affairs (UNODA), 2023, https://disarmament.un.org/ai-autonomy.

Algorithmic Bias in Target Recognition Systems. MIT Media Lab, 2024, https://www.media.mit.edu/ai-bias-military.

Internal Displacement from AI-Assisted Conflicts. Internal Displacement Monitoring Centre (IDMC), 2024, https://www.internal-displacement.org/ai-conflict-report.

The Automation Paradox in Military Systems. RAND Corporation, 2023, https://www.rand.org/publications/automation-paradox.

Campaign to Stop Killer Robots: Policy Brief. International Committee of the Red Cross (ICRC), 2024, https://www.icrc.org/en/stop-killer-robots.

Autonomous Drones in Libya Conflict. UN Panel of Experts on Libya, 2021.

