Researchers Deploy AI Models to Forecast and Analyze Armed Conflicts
TL;DR
Research groups in Europe and the United States are deploying machine learning models that claim to forecast armed conflicts up to three years in advance, attracting millions in government and foundation funding. But questions about accuracy, data bias against the Global South, accountability gaps, and the absence of consent from affected populations raise serious concerns about whether these tools are ready for operational use — or whether they risk becoming instruments of the power imbalances they claim to address.
In early 2026, a machine learning system run jointly by the Peace Research Institute Oslo (PRIO) and Uppsala University issued its latest monthly forecast: Ukraine would suffer an estimated 28,300 battle-related deaths over the coming year, followed by Palestine and Israel at 7,700, and Sudan at 4,300. The projections were public, downloadable, and updated every month — a routine output of the Violence and Impacts Early-Warning System, known as VIEWS, one of the most prominent AI-driven conflict forecasting platforms in the world.
VIEWS is not alone. A growing ecosystem of researchers, governments, and international organizations now treats algorithmic conflict prediction as a serious policy tool. The Armed Conflict Location and Event Data Project (ACLED) runs its own Conflict Alert System (CAST), which uses machine learning to predict political violence at subnational levels up to six months out. The Early Warning Project, housed at the U.S. Holocaust Memorial Museum, assesses mass atrocity risks globally. Academic publications on AI conflict prediction have surged from roughly 330 papers in 2015 to more than 25,600 in 2025, according to OpenAlex data.
But the rapid growth of this field has outpaced the frameworks needed to govern it. Who funds these models? How accurate are they really? What happens when a forecast is wrong — and who bears responsibility?
The Models and Their Track Records
The VIEWS system, directed by political scientist Håvard Hegre, generates monthly predictions of fatalities in state-based conflicts one to 36 months ahead. It draws on historical violence patterns, socioeconomic indicators, climate conditions, geographic factors, and political institutions. The system correctly identified seven of the ten deadliest countries in 2024 and six of the top ten in 2023.
Validation studies of the VIEWS model report a true positive rate of 0.63 when cases with a predicted conflict probability above 0.5 are flagged, rising to 0.79 at a more permissive 0.3 threshold, with corresponding false positive rates of 0.030 and 0.085. Those numbers mean that at its more permissive threshold, the model flags about 8.5% of peaceful country-months as conflict-prone — a rate that, when applied globally, could affect dozens of countries annually.
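To make the threshold trade-off concrete, here is a minimal, illustrative sketch of how true and false positive rates are computed at different flagging thresholds. The data is synthetic and hypothetical (it will not reproduce the published VIEWS figures), and nothing here is drawn from the VIEWS codebase:

```python
import numpy as np

def rates_at_threshold(y_true, y_prob, threshold):
    """Compute true/false positive rates at a given flagging threshold.

    y_true: 1 where conflict actually occurred in a country-month, else 0.
    y_prob: the model's predicted conflict probability for that country-month.
    """
    flagged = y_prob > threshold
    positives = (y_true == 1)
    tpr = (flagged & positives).sum() / max(positives.sum(), 1)
    fpr = (flagged & ~positives).sum() / max((~positives).sum(), 1)
    return tpr, fpr

# Synthetic data: 1,000 country-months, roughly 5% in conflict,
# mirroring the class imbalance discussed later in this article.
rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)
y_prob = np.clip(0.4 * y_true + rng.normal(0.2, 0.15, 1000), 0, 1)

for t in (0.5, 0.3):
    tpr, fpr = rates_at_threshold(y_true, y_prob, t)
    print(f"threshold {t}: TPR={tpr:.2f}, FPR={fpr:.3f}")
```

Lowering the threshold catches more real conflicts but also flags more peaceful cases, which is exactly the trade-off the VIEWS validation figures describe.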
ACLED's CAST system takes a different approach, focusing on both conflict onset and escalation at subnational levels, and claims to be unique in its comprehensive use of ACLED's own event data. The Early Warning Project successfully flagged Ethiopia in 2015 and Myanmar in 2016 as high-risk for mass atrocities before violence escalated in both countries.
The Political Instability Task Force (PITF), an older U.S. government-linked initiative, used decision-tree algorithms to achieve roughly 80% accuracy in retrospectively identifying risk of political instability as of 2010. A persistent finding from the PITF era — one that still challenges today's more sophisticated models — is that simplistic models with a few powerful variables performed as well as complex models at the country-year level.
How Far Ahead — and How Reliably?
Most current systems operate on time horizons of one to 36 months. The 2023/24 VIEWS Prediction Challenge, published in the Journal of Peace Research, invited competing research teams to forecast fatalities for a 12-month window from July 2024 to June 2025, with comprehensive evaluation ongoing. The challenge was funded by the German Ministry of Foreign Affairs, Uppsala University, the European Research Council, Riksbankens Jubileumsfond, and the Center for Advanced Studies in Oslo.
The false positive problem is not abstract. When a country is flagged as high-risk and violence does not materialize, the consequences can range from diverted humanitarian resources to diplomatic stigma. PRIO's researchers acknowledge that "forecasts should be seen as best-estimate scenarios, not certainties," and that the model "tends to err on the conservative side." But the gap between a research caveat and how policymakers interpret a red-flagged country remains wide.
Following the Money
The funding behind AI conflict forecasting comes from a mix of government agencies, international bodies, and research foundations. Hegre's ANTICIPATE project, which is integrated into the VIEWS consortium, received a European Research Council Advanced Grant of EUR 2.5 million. The broader VIEWS system is also supported by the German Ministry of Foreign Affairs and the Swedish Riksbankens Jubileumsfond.
In the United States, DARPA has invested more than $2 billion since 2018 through its AI Next campaign for national security AI applications. The Pentagon's topline AI budget request for fiscal 2025 was $1.8 billion. While DARPA's public programs focus more on autonomous systems and trusted AI than on conflict forecasting specifically, the line between intelligence analysis, strategic warning, and prediction is blurred in practice.
The U.S. government's PITF was explicitly created to serve intelligence community needs, raising questions about whether model outputs developed under such arrangements carry classification constraints or are shaped by sponsor priorities. When governments fund predictive tools, the question of who controls the outputs — and whether those outputs inform classified assessments that the public never sees — becomes unavoidable.
The Data Gap: Who Gets Predicted, Who Gets Missed
The training data behind conflict forecasting models carries systematic biases that disproportionately affect low-income and non-English-speaking regions. As a study in Data & Policy noted, the data used to train these models — including economic indicators, climate variables, and event-based data — "remain the key barrier to progress." In half of all states in Africa, nationally representative livelihood surveys are conducted on average only once every 6.5 years, compared to multiple times per year in wealthy countries.
The countries producing the most refugees — Syria (5.5 million), Ukraine (5.3 million), Afghanistan (4.8 million), Sudan (2.5 million), and South Sudan (2.4 million) — are also among those with the thinnest data coverage. This creates a paradox: the places where conflict prediction is most needed are the places where models are least reliable.
A further challenge is class imbalance. Over 95% of data points in conflict datasets represent periods of peace, causing models to systematically underpredict violence. Because training data over-represents peace relative to the violence that eventually materializes, models fitted to it underpredict consistently across different time horizons.
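A standard mitigation is to re-weight the rare conflict class during training. The sketch below shows the general technique using scikit-learn on synthetic data; the feature set and library choice are assumptions for illustration, not a description of any of these systems' actual pipelines:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in: X holds per-country-month features (e.g., past
# violence counts, socioeconomic indicators); y is 1 for conflict months.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 5000) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# class_weight="balanced" scales each class inversely to its frequency,
# so the ~95% peace majority does not dominate the fit and push the
# model toward predicting peace everywhere.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)
print("share of test months flagged as conflict:", model.predict(X_test).mean())
```

Re-weighting reduces underprediction but raises the false positive rate, so it trades one failure mode for the other rather than eliminating the imbalance problem.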
News wire data and social media feeds, both common inputs, are dominated by English-language sources and disproportionately cover conflicts that attract Western media attention. As Stanford's CRAFT program has documented, AI models trained on data from "U.S. or other WEIRD sources (western, educated, industrialized, rich and democratic)" perform poorly when applied to populations outside that frame. Satellite imagery partially compensates, but its interpretation still requires ground-truth validation that is often unavailable in active conflict zones.
The Skeptical Case: Do These Models Beat Simple Baselines?
The strongest skeptical argument against AI conflict forecasting is not that the models are useless, but that they may not meaningfully outperform much simpler approaches. The PITF's own research found that parsimonious models — using just a handful of structural variables like regime type and infant mortality — performed comparably to more complex algorithms at the country-year level.
Avi Goldfarb and Jon R. Lindsay, writing in International Security, argued that advances in machine learning are "reducing the costs of statistical prediction" but "simultaneously increasing the value of data (which enable prediction) and judgment (which determines why prediction matters)." Their analysis suggests that AI prediction and human judgment are complements, not substitutes — and that over-relying on algorithmic outputs without contextual expertise could degrade rather than improve decision-making.
Current AI models also struggle with novelty. As the TRENDS Research & Advisory analysis observed, these tools "typically do not incorporate the status quo," leading them to "overfit data to specific states and perform irrationally when dealing with information on novel events." A model trained on the patterns of past wars may fail to anticipate the mechanisms of the next one.
Despite these limitations, governments and NGOs continue to invest because the alternative — relying solely on human analysts who face their own cognitive biases and bandwidth constraints — has its own failure modes. The question is whether the marginal improvement from AI justifies the costs, including the reputational and human costs of false positives.
Accountability When Forecasts Go Wrong
No binding international legal framework currently governs the use of AI conflict forecasts in operational decisions. If a government cites an algorithmic prediction as partial justification for sanctions, arms embargoes, or preemptive military action, and that prediction proves wrong, the accountability gap is significant.
International humanitarian law holds states and individuals responsible for the use of force, but as one analysis in Cogent Social Sciences noted, AI may "obfuscate the linearity of this process" — making it harder to trace decision chains from prediction to action. The U.S. Department of Defense has rejected a preemptive ban on autonomous weapons, opting instead for governance through its 2020 Ethical Principles for Artificial Intelligence, which emphasize "responsible, traceable, and governable" AI development. The Stop Killer Robots campaign, by contrast, argues that only a categorical international treaty can prevent the humanitarian risks such systems pose.
UNESCO's 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by all 194 member states, calls for "oversight, impact assessment, audit and due diligence mechanisms" to ensure AI accountability. A 2024 assessment using UNESCO's Readiness Assessment Methodology found compliance and governance gaps in 78% of participating nations. The ISO/IEC joint committee on AI (SC 42) is developing international standards, but these remain voluntary and do not specifically address conflict forecasting.
No publicly documented case exists of a government acknowledging that it used an AI conflict forecast as a basis for military action. But the opacity of intelligence processes means that such uses — if they have occurred — would be unlikely to surface publicly.
Voices of the Predicted
Perhaps the most under-examined dimension of AI conflict forecasting is how it is perceived by the populations it models. Conflict-affected communities are subjects of these algorithms, not participants in their design. No major AI conflict forecasting system currently incorporates consent mechanisms for the populations being analyzed.
The UN Office of the High Commissioner for Human Rights has warned that predictive algorithms may "pre-emptively flag individuals as security threats based on their communications and interactions, leading to targeted harassment or prosecution." While conflict forecasting models operate at the country or subnational level rather than targeting individuals, the downstream effects — resource allocation, diplomatic pressure, intervention decisions — directly affect populations who had no say in the model's construction.
Civil society organizations have raised concerns about the broader pattern. Amnesty International has reported that predictive systems encourage "racist and discriminatory policing and criminalization of areas, groups and individuals." Research from the UK shows that nearly two-thirds of the public are worried about AI's impacts on democracy and personal freedom, with people feeling "powerless to influence AI's direction."
The OpenGlobalRights platform has cautioned that AI models trained on incomplete data may produce "inaccurate or, worse, discriminatory" predictions, causing "false accusations or missed warning signs." The risk extends beyond forecast errors: the labeling of regions as conflict-prone can become self-reinforcing, affecting investment, aid flows, and international perceptions in ways that entrench the very instability the models claim to predict.
From Research Tool to Operational Instrument
For AI conflict forecasting to move from academic exercise to operationally trusted instrument, several conditions would need to be met. Models would require independent, third-party validation against standardized benchmarks — something the VIEWS Prediction Challenge represents a step toward, but which remains far from a formal certification process.
Auditability is another prerequisite. Most current systems use ensemble methods or neural networks that function as black boxes. Mandating interpretable models, or at a minimum comprehensive logging, would, as one analysis proposed, "enable countries to support post-incident inquiries, permitting courts to evaluate whether harms were foreseeable and attributable."
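What such logging might look like in practice: the sketch below is a hypothetical illustration (the record fields, file format, and hash-chaining scheme are assumptions, not any deployed system's design) of an append-only audit trail written for each issued forecast:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_forecast(path, country, horizon_months, predicted_fatalities,
                 model_version, input_digest):
    """Append one tamper-evident audit record per issued forecast.

    Each record stores the hash of the previous record, so a later
    inquiry can verify that no entry was altered or silently removed.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "country": country,
        "horizon_months": horizon_months,
        "predicted_fatalities": predicted_fatalities,
        "model_version": model_version,
        "input_digest": input_digest,  # hash of the exact input data used
    }
    try:
        with open(path) as f:
            record["prev_hash"] = json.loads(f.readlines()[-1])["record_hash"]
    except (FileNotFoundError, IndexError):
        record["prev_hash"] = "genesis"
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record a 12-month forecast before it is published.
log_forecast("forecast_audit.jsonl", "Exampleland", 12, 4300,
             "model-v1.0", hashlib.sha256(b"input data snapshot").hexdigest())
```

A chained log of this kind does not make a model interpretable, but it gives courts and auditors a fixed record of what was predicted, when, from which inputs, and by which model version.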
The question of who sets these standards remains unresolved. UNESCO has the broadest mandate, with its ethics recommendation covering all 194 member states. The ISO/IEC standards process offers technical rigor but lacks enforcement mechanisms. No international body currently has both the mandate and the capacity to certify an AI conflict forecasting system as fit for operational deployment.
The VIEWS team has taken a transparency-first approach: the project is open-source, and its forecasts are publicly available and updated monthly. This stands in contrast to classified government systems whose performance cannot be independently assessed. Whether the field moves toward openness or opacity will determine much about its legitimacy.
What Is at Stake
The appeal of AI conflict forecasting is real. Armed conflicts killed tens of thousands of people in 2024 and displaced millions more. Any tool that could provide even a few months of advance warning — enough to position humanitarian aid, initiate diplomacy, or prepare evacuation corridors — could save lives.
But the gap between what these models can do and how they might be used remains large. A system that correctly identifies seven of the ten deadliest conflict zones is impressive as a research achievement. Whether it is reliable enough to justify diplomatic sanctions, aid reallocation, or military positioning is a different question — one that neither the researchers building these tools nor the policymakers considering their use have fully answered.
The field is advancing rapidly, with research output growing exponentially and new funding flowing from European research councils, foreign ministries, and defense agencies. The technical question — can these models predict conflict? — is increasingly being answered with a qualified yes. The harder questions — should they? Who decides? And who is accountable when they are wrong? — remain open.
Sources (19)
- [1] AI model warns of deadliest conflict zones in 2026 (prio.org)
  VIEWS forecasts for 2026 project Ukraine at 28,300 battle-related deaths, with the model correctly identifying 7 of 10 deadliest countries in 2024.
- [2] Catalogue of Early Warning Tools for Anticipating the Impact of Conflict (anticipation-hub.org)
  ACLED's CAST system uses machine learning to predict political violence events at national and subnational levels up to six months into the future.
- [3] The Impact of AI and Machine Learning on Conflict Prevention (trendsresearch.org)
  AI and ML can predict and prevent conflicts more effectively but face challenges including data quality, bias, and the risk of automation bias in decision-making.
- [4] OpenAlex: Research publications on AI conflict prediction forecasting (openalex.org)
  Academic publications on AI conflict prediction surged from 329 in 2015 to over 25,600 in 2025, reflecting rapid growth in the field.
- [5] VIEWS: Violence & Impacts Early-Warning System (viewsforecasting.org)
  Open-source project producing monthly forecasts for armed conflict 1-36 months ahead, with true positive rate of 0.63-0.79 depending on threshold.
- [6] Machine Learning and Conflict Prediction: A Use Case (stabilityjournal.org)
  PITF models achieved roughly 80% accuracy retrospectively; simplistic models with few powerful variables performed comparably to complex algorithms.
- [7] The 2023/24 VIEWS Prediction Challenge: Predicting fatalities in armed conflict (arxiv.org)
  Challenge funded by German Ministry of Foreign Affairs, ERC ANTICIPATE grant, Uppsala University, and Riksbankens Jubileumsfond for 12-month forecasting window.
- [8] New ERC Advanced Grant of EUR 2.5 million awarded to VIEWS director (viewsforecasting.org)
  Håvard Hegre received a EUR 2.5 million ERC Advanced Grant for the ANTICIPATE project on anticipating armed conflict impacts on human development.
- [9] The big AI research DARPA is funding this year (defenseone.com)
  DARPA has invested more than $2 billion since 2018 through AI Next; Pentagon AI budget request for FY2025 was $1.8 billion.
- [10] The promise of machine learning in violent conflict forecasting (cambridge.org)
  Data quality remains the key barrier; African states average 6.5 years between livelihood surveys; over 95% of conflict data points represent peace periods.
- [11] UNHCR Refugee Population Statistics (unhcr.org)
  Top refugee-producing countries in 2025: Syria (5.5M), Ukraine (5.3M), Afghanistan (4.8M), Sudan (2.5M), South Sudan (2.4M).
- [12] How does bias in AI affect the Global South? (craft.stanford.edu)
  AI training data is often from U.S. or WEIRD sources, limiting model effectiveness for populations outside western, educated, industrialized, rich, democratic societies.
- [13] Prediction and Judgment: Why Artificial Intelligence Increases the Importance of Humans in War (direct.mit.edu)
  ML advances reduce prediction costs but increase the value of data and judgment; AI prediction and human judgment are complements, not substitutes.
- [14] Artificial intelligence and arms control in modern warfare (tandfonline.com)
  AI may obfuscate decision-chain linearity; competing approaches include preemptive bans vs. governance frameworks like DoD's 2020 Ethical Principles for AI.
- [15] UNESCO Recommendation on the Ethics of Artificial Intelligence (unesco.org)
  First global AI ethics standard adopted by 194 member states in 2021; 2024 assessment found compliance gaps in 78% of participating nations.
- [16] How ISO and IEC are developing international standards for responsible AI adoption (unesco.org)
  ISO/IEC SC 42 developing comprehensive AI standards with regulatory, business, societal and ethical perspectives for universal application.
- [17] UN OHCHR: Protecting Human Rights while Using Artificial Intelligence (ohchr.org)
  Predictive algorithms may pre-emptively flag individuals as security threats based on communications, leading to targeted harassment or prosecution.
- [18] AI and Surveillance Are Reshaping Global Human Rights Protections (impactpolicies.org)
  Amnesty International reported predictive systems encourage discriminatory policing; UK research shows two-thirds of public worried about AI impacts on democracy.
- [19] The role of artificial intelligence in predicting human rights violations (openglobalrights.org)
  AI models trained on incomplete data may produce inaccurate or discriminatory predictions, causing false accusations or missed warning signs.