
Samuel P. Fraiberger


Fine-grained prediction of food insecurity using news streams

Nov 17, 2021
Ananth Balashankar, Lakshminarayanan Subramanian, Samuel P. Fraiberger

Anticipating the outbreak of a food crisis is crucial to efficiently allocate emergency relief and reduce human suffering. However, existing food insecurity early warning systems rely on risk measures that are often delayed, outdated, or incomplete. Here, we leverage recent advances in deep learning to extract high-frequency precursors to food crises from the text of a large corpus of news articles about fragile states published between 1980 and 2020. Our text features are causally grounded, interpretable, validated by existing data, and allow us to predict 32% more food crises than existing models up to three months ahead of time at the district level across 15 fragile states. These results could have profound implications for how humanitarian aid gets allocated and open new avenues for machine learning to improve decision making in data-scarce environments.
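The abstract does not include the paper's pipeline; the snippet below is only a minimal, hypothetical sketch of the general idea it describes: count causally grounded precursor terms in a district's news stream and feed the frequencies to a classifier predicting crisis onset. The term list, toy data, and scikit-learn model are illustrative assumptions, not the authors' actual features or architecture.

```python
# Hypothetical sketch: district-level food-crisis early warning from
# news-derived text features. Terms, data, and model are illustrative
# assumptions, not the paper's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed "causally grounded" precursor terms extracted from news text.
PRECURSOR_TERMS = ["drought", "conflict", "displacement", "price spike"]

def news_features(articles):
    """Feature vector for one district-month: mean frequency of each term."""
    counts = np.zeros(len(PRECURSOR_TERMS))
    for text in articles:
        for i, term in enumerate(PRECURSOR_TERMS):
            counts[i] += text.lower().count(term)
    return counts / max(len(articles), 1)

# Toy training rows: one per (district, month); label = crisis within 3 months.
X = np.array([news_features(a) for a in [
    ["severe drought reported", "conflict displaces thousands"],
    ["harvest proceeding normally"],
    ["price spike in local markets amid drought"],
    ["rainfall near seasonal average"],
]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])  # predicted crisis risk per district-month
```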

Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals

Jun 26, 2016
Daizhuo Chen, Samuel P. Fraiberger, Robert Moakler, Foster Provost

Recent studies have shown that information disclosed on social network sites (such as Facebook) can be used to predict personal characteristics with surprisingly high accuracy. In this paper we examine a method to give online users transparency into why certain inferences are made about them by statistical models, and control to inhibit those inferences by hiding ("cloaking") certain personal information from inference. We use this method to examine whether such transparency and control would be a reasonable goal by assessing how difficult it would be for users to actually inhibit inferences. Applying the method to data from a large collection of real users on Facebook, we show that a user must cloak only a small portion of their Facebook Likes in order to inhibit inferences about their personal characteristics. However, we also show that, in response, a firm could change its modeling of users to make cloaking more difficult.
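As a rough illustration of the cloaking idea described above, here is a hypothetical sketch against a linear model: hide the Likes that contribute most to the inference score until the predicted label flips. The weights and Likes are invented for illustration and do not reproduce the paper's models or data.

```python
# Hypothetical sketch of "cloaking": greedily hide the Likes contributing
# most to a linear model's score until the inference is inhibited.
# Weights, bias, and Likes below are made up for illustration.

weights = {"like_A": 1.2, "like_B": 0.9, "like_C": 0.3, "like_D": -0.5}
bias = -1.0

def predict(likes):
    """Positive score => the model infers the sensitive characteristic."""
    return bias + sum(weights[l] for l in likes)

def cloak(likes):
    """Hide the highest-weight Likes until the inference flips."""
    visible = sorted(likes, key=lambda l: weights[l])  # ascending weight
    hidden = []
    while visible and predict(visible) > 0:
        hidden.append(visible.pop())  # remove the strongest signal first
    return visible, hidden

user_likes = ["like_A", "like_B", "like_C", "like_D"]
visible, hidden = cloak(user_likes)
print("inference inhibited by hiding:", hidden)  # here: just one Like
```

In this toy example a single hidden Like is enough to flip the inference, echoing the abstract's finding that only a small portion of Likes needs to be cloaked.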

* presented at 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY 