
Refugees of the Algorithm: How AI Resettlement Systems Are Rewriting Human Rights


By Haru Takeda | Aug. 31, 2025



In the 20th century, the fate of refugees was often determined by diplomats, bureaucrats, and political negotiations. In the 21st, it is increasingly being decided by algorithms. Across Europe, North America, and parts of Asia, governments are adopting artificial intelligence (AI) systems to manage everything from refugee resettlement to asylum screening. Proponents argue that AI improves efficiency and fairness in a system burdened by millions of displaced people. Yet, beneath the promise of automation lies a profound moral and political dilemma: what happens when the right to refuge is mediated by code rather than compassion?

As the number of forcibly displaced people surpasses 120 million globally, according to the UN Refugee Agency (UNHCR, 2024), AI-driven governance is becoming the backbone of humanitarian policy. But as predictive analytics and machine learning reshape how refugees are categorized, prioritized, and relocated, they also raise new questions about bias, accountability, and human dignity in an age of digital humanitarianism.



Automation of Compassion

The global refugee system is overwhelmed. Conflicts in Ukraine, Sudan, Gaza, and Myanmar have generated some of the largest displacement crises since World War II. With traditional bureaucracies struggling to manage data, governments are turning to AI to streamline resettlement logistics.

In 2023, the International Organization for Migration (IOM) launched an AI-based platform called PRIMES (Personalized Refugee Integration and Migration Evaluation System), designed to analyze biometric, educational, and psychological data to match refugees with host communities most likely to support successful integration. Similarly, Canada and Finland have experimented with machine-learning models that predict where asylum seekers are most likely to find employment and social stability.
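In broad strokes, such matching systems pair an outcome-prediction model with an assignment step. The sketch below illustrates that two-stage logic in deliberately simplified form; the feature names, weights, and greedy assignment are hypothetical assumptions made for illustration, not details of PRIMES or any deployed government system.

```python
# Illustrative two-stage matcher: predict an integration outcome, then assign.
# All names, weights, and data here are hypothetical, not drawn from PRIMES.
from dataclasses import dataclass

@dataclass
class Applicant:
    id: str
    speaks_host_language: bool
    years_education: int

@dataclass
class Community:
    name: str
    open_slots: int
    job_openings: int

def predicted_employment(a: Applicant, c: Community) -> float:
    """Toy stand-in for a trained model: returns a 0-1 employment likelihood."""
    score = 0.2
    score += 0.3 if a.speaks_host_language else 0.0
    score += min(a.years_education, 12) / 40      # education caps at +0.3
    score += min(c.job_openings, 200) / 1000      # local demand caps at +0.2
    return min(score, 1.0)

def greedy_match(applicants: list[Applicant], communities: list[Community]) -> dict[str, str]:
    """Assign each applicant to the open community with the highest predicted score."""
    placements: dict[str, str] = {}
    for a in applicants:
        best = max((c for c in communities if c.open_slots > 0),
                   key=lambda c: predicted_employment(a, c), default=None)
        if best is not None:
            best.open_slots -= 1
            placements[a.id] = best.name
    return placements
```

Even in this toy version, the central design choice is visible: whatever the scoring function rewards quietly becomes placement policy.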

Supporters hail these systems as breakthroughs. A 2024 OECD report found that AI-assisted placement increased employment rates among refugees in pilot programs by 27 percent compared to manual assignment. By optimizing resettlement outcomes, governments can theoretically balance humanitarian obligations with economic efficiency. But as with any system trained on historical data, algorithms risk inheriting—and amplifying—the biases of the past.



When Bias Becomes Policy

AI does not make decisions in a vacuum. It learns from existing patterns—patterns often shaped by systemic inequality. In migration governance, those patterns reflect decades of discrimination based on nationality, gender, religion, and perceived “assimilability.”

A 2023 analysis by the Carnegie Endowment for International Peace revealed that AI tools used in European asylum screening disproportionately flagged applicants from Muslim-majority countries as “high-risk” because of proxy variables like language, travel route, or social-media activity. Similarly, the University of Toronto’s Citizen Lab found that algorithmic visa systems in the United Kingdom and Australia used risk-profiling methods that effectively penalized applicants from Africa and the Middle East.

These digital biases are not trivial—they carry life-or-death consequences. In one 2022 case documented by Human Rights Watch, an automated risk assessment used by Dutch immigration authorities erroneously classified 1,500 asylum seekers as “fraud-prone,” delaying their applications for months. The government later admitted that the model’s inputs—such as the frequency of document corrections—were skewed by linguistic and cultural differences rather than actual fraud indicators.
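The mechanics behind such skew are easy to reproduce. The sketch below uses invented numbers to show how a seemingly neutral rule, flagging applicants whose documents need many corrections, can single out non-native speakers almost exclusively; nothing in it reflects the actual Dutch model.

```python
# Hypothetical illustration of proxy bias: a "neutral" threshold on document
# corrections ends up flagging non-native speakers almost exclusively.
import random

random.seed(0)

def corrections_needed(native_speaker: bool) -> int:
    # Assumption: filing in a second language requires more corrections for
    # reasons unrelated to fraud (translation issues, unfamiliar forms).
    return random.randint(0, 2) if native_speaker else random.randint(1, 6)

FLAG_THRESHOLD = 3   # more corrections than this marks a file "fraud-prone"

def flag_rate(native_speaker: bool, n: int = 10_000) -> float:
    flagged = sum(corrections_needed(native_speaker) > FLAG_THRESHOLD for _ in range(n))
    return flagged / n

print(f"native speakers flagged:     {flag_rate(True):.1%}")    # ~0.0%
print(f"non-native speakers flagged: {flag_rate(False):.1%}")   # ~50.0%
```

The rule never mentions nationality or language, yet its output is starkly lopsided, which is precisely the pattern the Dutch case exposed.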

In effect, the same algorithmic logic that governs social media feeds and credit scores is now determining who deserves safety.



Predicting Integration, Defining Worth

The growing use of predictive analytics in refugee placement introduces a new moral question: should the value of a refugee’s life be determined by their “integration potential”?

AI-driven models often prioritize measurable socioeconomic outcomes—employment likelihood, education level, language acquisition—over unquantifiable factors such as trauma, family ties, or cultural resilience. In the U.S., the Refugee Processing Center (RPC) and its AI-assisted “Matching Algorithm for Resettlement” rank asylum seekers based on how well they fit host communities. While this may improve efficiency, it also implies a utilitarian hierarchy: refugees are sorted not by need, but by predicted productivity.
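What that hierarchy looks like in code is instructive. The following is a hypothetical scoring function, not the RPC’s actual algorithm; its real lesson is in what is missing, since anything without a term in the sum carries zero weight.

```python
# Hypothetical "integration potential" score; not the RPC's actual model.
# Note what is absent: trauma, family ties, and cultural resilience have no
# term in the sum, so they carry exactly zero weight in the ranking.

WEIGHTS = {
    "employment_likelihood": 0.5,   # output of some upstream model, 0-1
    "education_level":       0.3,   # normalized 0-1
    "language_proficiency":  0.2,   # normalized 0-1
}

def integration_score(profile: dict[str, float]) -> float:
    return sum(weight * profile.get(key, 0.0) for key, weight in WEIGHTS.items())

applicants = {
    "A": {"employment_likelihood": 0.9, "education_level": 0.8, "language_proficiency": 0.7},
    "B": {"employment_likelihood": 0.3, "education_level": 0.2, "language_proficiency": 0.1},
}

# Applicant B may have the greater humanitarian need, but the ranking sees
# only the measured variables and sorts B to the bottom.
ranked = sorted(applicants, key=lambda name: integration_score(applicants[name]), reverse=True)
print(ranked)   # ['A', 'B']
```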

Critics argue that this logic mirrors the marketization of migration—a transformation of humanitarian policy into human capital management. As migration scholar Tendayi Bloom observes, “AI does not just predict success; it defines what success means.” The risk is that the right to refuge becomes conditional on economic performance rather than humanitarian necessity.



Surveillance in the Name of Safety

Beyond selection and placement, AI is reshaping how refugees are monitored after arrival. Facial-recognition cameras, biometric databases, and predictive-policing tools are now common in refugee camps and border zones. The European Union’s Entry/Exit System (EES), launched in 2024, uses AI to track non-EU nationals’ movements through biometric scans, while Kenya’s Digital Refugee ID System links individuals’ fingerprints and iris data to all government services.

Authorities justify these systems as security measures against trafficking and fraud. Yet, they often function as instruments of surveillance and control. In Jordan’s Zaatari camp, AI-assisted CCTV networks analyze crowd behavior to predict “potential unrest,” a practice criticized by Amnesty International for criminalizing collective movement. In Greece, drone-based analytics now monitor refugee crossings in real time—transforming humanitarian spaces into digital border zones.

The irony is profound: those fleeing persecution and authoritarian surveillance are increasingly governed by algorithms in exile. The digital walls built to “protect” refugees may, in practice, confine them to perpetual scrutiny.



Algorithmic Accountability: A Legal Vacuum

While the ethical dilemmas of AI in refugee management are vast, the legal safeguards remain almost nonexistent. International refugee law—anchored in the 1951 Refugee Convention—predates the digital age and contains no provisions for algorithmic decision-making.

The European Court of Human Rights (ECtHR) has ruled that automated asylum decisions must be “subject to meaningful human oversight,” yet what constitutes “meaningful” remains undefined. Meanwhile, AI governance frameworks like the EU AI Act and the UNESCO Recommendation on AI Ethics classify refugee-management systems as “high-risk,” but enforcement is inconsistent across jurisdictions.

The lack of transparency compounds the problem. Many governments classify algorithmic decision tools as proprietary, shielding them from public scrutiny. As legal scholar Elettra Bietti notes, “Opacity is the new form of state secrecy—one that hides not intentions, but instructions.” When decisions about human lives are delegated to private software vendors, accountability diffuses into the cloud.



Humanitarian Tech or Digital Displacement?

Despite its risks, algorithmic resettlement is not inherently dystopian. When ethically designed, AI can enhance fairness, reduce bottlenecks, and support overwhelmed asylum systems. The challenge is to ensure that technology augments empathy rather than replaces it.

Pilot programs in Norway and New Zealand offer cautious optimism. Both countries have implemented “human-in-the-loop” AI systems that use machine learning to support—not substitute—human judgment. In these cases, algorithms provide recommendations, but final decisions remain with trained caseworkers, ensuring oversight and moral reasoning.
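Structurally, the difference between decision support and decision substitution is small but decisive: the model may only recommend, and every final decision is recorded under a named human. A minimal sketch of that pattern, with invented field names, might look like this:

```python
# Minimal human-in-the-loop pattern: the model recommends, a caseworker decides.
# Field names and structure are illustrative, not from any deployed system.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggested_placement: str
    model_confidence: float   # 0-1, surfaced so the reviewer can weigh it
    rationale: str            # top contributing factors, shown to the caseworker

@dataclass
class Decision:
    case_id: str
    placement: str
    decided_by: str           # always a named human, never "system"
    overrode_model: bool      # preserved for later audit

def finalize(rec: Recommendation, caseworker: str, placement: str) -> Decision:
    """The model never writes a Decision; only this human-invoked step does."""
    return Decision(
        case_id=rec.case_id,
        placement=placement,
        decided_by=caseworker,
        overrode_model=(placement != rec.suggested_placement),
    )
```

Logging whether the caseworker overrode the model matters, too: it is what later lets auditors ask whether “meaningful human oversight” was anything more than rubber-stamping.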

Similarly, the UNHCR Innovation Service has launched the Refugee Data Responsibility Framework (RDRF), a global initiative to ensure consent, transparency, and data minimization in refugee-related AI systems. While progress is incremental, these models suggest a path forward: one where technology serves humanitarian values rather than erodes them.



Conclusion

The automation of asylum marks a turning point in human rights history. For the first time, the humanitarian promise of protection is being filtered through predictive code. If managed ethically, AI could revolutionize refugee governance, making it faster, fairer, and more adaptive. If not, it risks transforming one of humanity’s noblest commitments into an experiment in digital disposability.

The true question is not whether algorithms can make better decisions—it is whether they should. Refuge is not a dataset to be optimized; it is a moral contract to be honored. The future of humanitarianism depends on remembering that, even in an age of artificial intelligence, compassion cannot be automated.



Works Cited

Global Trends: Forced Displacement 2024. United Nations High Commissioner for Refugees (UNHCR), 2024, https://www.unhcr.org/globaltrends2024.

AI and Migration Governance. Organisation for Economic Co-operation and Development (OECD), 2024, https://www.oecd.org/migration/ai-governance-2024.

Automating Asylum: Risks of AI in Migration Management. Carnegie Endowment for International Peace, 2023, https://carnegieendowment.org/automating-asylum.

Risk, Rights, and Refuge: Algorithmic Bias in Immigration Systems. Citizen Lab, University of Toronto, 2024, https://citizenlab.ca/reports/refugee-ai-bias.

Refugee Data Responsibility Framework. UNHCR Innovation Service, 2024, https://innovation.unhcr.org/refugee-data-responsibility.

Surveillance and Displacement. Amnesty International, 2024.

