Abstract: Whether a country is at war, or experiencing escalating or de-escalating levels of conflict, has massive ramifications for its national and foreign policy. Given a country's history of conflict, or lack thereof, predictions about its future war status are valuable information. In this paper, we present the use of conformal prediction on temporally dependent data to obtain prediction sets of possible future conflict-state sequences. More specifically, we compare the results of conformal prediction to a likelihood-based prediction strategy when the data are assumed to come from a discrete-state Markov process. A point prediction may not supply sufficient information because the penalty for a wrong prediction is extreme, so we consider a machine learning alternative that gives valid uncertainty quantification and is robust to model misspecification. In the data analysis, we present real forecasts of conflict dynamics across multiple countries. Lastly, we comment on possible limitations of existing approaches for applying conformal prediction to Markovian data, where the exchangeability assumption is violated.
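As a hedged illustration of the kind of method the abstract describes, the following sketch builds a split-conformal prediction set over next conflict states from an estimated transition matrix. All names and the setup are illustrative assumptions, not the paper's implementation, and (as the abstract notes) Markov dependence violates the exchangeability that split conformal assumes, so treating calibration transitions as approximately exchangeable is itself an assumption here.

```python
import numpy as np

def conformal_next_state_set(P_hat, cal_states, cal_next, current_state, alpha=0.1):
    """Split-conformal prediction set for the next state of a discrete Markov chain.

    P_hat: estimated transition matrix (rows index the current state).
    cal_states, cal_next: calibration transitions (current states, observed next states).
    """
    # Nonconformity score: 1 minus the estimated probability of the observed transition.
    scores = 1.0 - P_hat[cal_states, cal_next]
    n = len(scores)
    # Finite-sample-corrected conformal quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(scores, level)
    # Include every candidate next state whose score does not exceed the threshold.
    return np.flatnonzero(1.0 - P_hat[current_state] <= q_hat)
```

Under exchangeability, sets built this way would cover the true next state with probability at least 1 − alpha; for Markovian data that guarantee is only approximate, which is the limitation the abstract raises.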
Abstract: Forecasting armed conflict is an important area of research with the potential to save lives and prevent suffering. However, most existing forecasting models provide only point predictions without individual-level uncertainty estimates. In this paper, we introduce a novel extension of the conformal prediction algorithm, which we call bin-conditional conformal prediction. This method allows users to obtain individual-level prediction intervals for any prediction model while maintaining a specified level of coverage across user-defined ranges of the outcome. We apply the bin-conditional conformal prediction algorithm to forecast fatalities from armed conflict. Our results demonstrate that the method provides well-calibrated uncertainty estimates for the predicted number of fatalities. Compared to standard conformal prediction, the bin-conditional method offers improved calibration of coverage rates across different values of the outcome, at the cost of wider prediction intervals.
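A minimal sketch of the bin-conditional idea, assuming a split-conformal setup in which the conformal quantile is computed separately within each bin. For illustration the bins are defined on the model's predictions as a stand-in for the user-defined outcome ranges the abstract describes, since the true outcome is unknown at test time; the function and variable names are hypothetical, not the authors' implementation.

```python
import numpy as np

def bin_conditional_intervals(pred_cal, y_cal, pred_test, bin_edges, alpha=0.1):
    """Per-bin split-conformal prediction intervals (lower, upper) for pred_test."""
    scores = np.abs(y_cal - pred_cal)        # absolute-residual nonconformity scores
    cal_bins = np.digitize(pred_cal, bin_edges)
    test_bins = np.digitize(pred_test, bin_edges)
    lower = np.empty_like(pred_test, dtype=float)
    upper = np.empty_like(pred_test, dtype=float)
    for b in np.unique(cal_bins):
        mask = cal_bins == b
        n = mask.sum()
        # Finite-sample-corrected conformal quantile, computed within the bin only.
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        q = np.quantile(scores[mask], level)
        test_mask = test_bins == b
        lower[test_mask] = pred_test[test_mask] - q
        upper[test_mask] = pred_test[test_mask] + q
    return lower, upper
```

Because each bin uses only its own calibration scores, bins with larger errors get wider intervals, which is consistent with the abstract's trade-off: better per-range calibration at the cost of wider intervals overall.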