
Alex 'Sandy' Pentland


A Study of Compositional Generalization in Neural Models

Jul 08, 2020
Tim Klinger, Dhaval Adjodah, Vincent Marois, Josh Joseph, Matthew Riemer, Alex 'Sandy' Pentland, Murray Campbell

Compositional and relational learning is a hallmark of human intelligence, but one that presents challenges for neural models. One difficulty in the development of such models is the lack of benchmarks with clear compositional and relational task structure on which to systematically evaluate them. In this paper, we introduce an environment called ConceptWorld, which enables the generation of images from compositional and relational concepts, defined using a logical domain-specific language. We use it to generate images for a variety of compositional structures: 2x2 squares, pentominoes, sequences, scenes involving these objects, and other more complex concepts. We perform experiments to test the ability of standard neural architectures to generalize on relations with compositional arguments as the compositional depth of those arguments increases and under substitution. We compare standard neural networks such as MLP, CNN and ResNet, as well as state-of-the-art relational networks including WReN and PrediNet, in a multi-class image classification setting. For simple problems, all models generalize well to concepts close in compositional depth to those seen in training, but struggle with longer compositional chains. For more complex tests involving substitutivity, all models struggle, even with short chains. In highlighting these difficulties and providing an environment for further experimentation, we hope to encourage the development of models which are able to generalize effectively in compositional, relational domains.
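The abstract describes concepts built by composing simpler concepts and relations between them. As a purely illustrative sketch (not the actual ConceptWorld DSL; all names here are invented), compositional concepts can be modeled as predicates over a grid, where composite concepts like a 2x2 square or a spatial relation are defined in terms of simpler ones, and compositional depth grows with each layer of definition:

```python
# Illustrative sketch only: concepts as composable predicates over a
# small grid world. Not the paper's DSL; names and shapes are invented.

def is_color(color):
    """Primitive concept: the cell at (r, c) has the given color."""
    return lambda grid, r, c: grid[r][c] == color

def square_2x2(cell_concept):
    """Composite concept: a 2x2 block whose cells all satisfy cell_concept."""
    def check(grid, r, c):
        return all(
            cell_concept(grid, r + dr, c + dc)
            for dr in (0, 1) for dc in (0, 1)
        )
    return check

def left_of(concept_a, concept_b, offset):
    """Relational concept: concept_a holds `offset` columns left of concept_b."""
    def check(grid, r, c):
        return concept_a(grid, r, c) and concept_b(grid, r, c + offset)
    return check

grid = [
    ["R", "R", ".", "B", "B"],
    ["R", "R", ".", "B", "B"],
]
red_square = square_2x2(is_color("R"))
blue_square = square_2x2(is_color("B"))
scene = left_of(red_square, blue_square, 3)
print(scene(grid, 0, 0))  # True: a red 2x2 square sits 3 columns left of a blue one
```

Stacking `left_of` and `square_2x2` further (e.g. relations whose arguments are themselves relations) gives the kind of increasing compositional depth the experiments vary.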

* 28 pages 

Modeling the Temporal Nature of Human Behavior for Demographics Prediction

Nov 15, 2017
Bjarke Felbo, Pål Sundsøy, Alex 'Sandy' Pentland, Sune Lehmann, Yves-Alexandre de Montjoye

Mobile phone metadata is increasingly used for humanitarian purposes in developing countries, where traditional data is scarce. However, basic demographic information is often absent from mobile phone datasets, limiting their operational impact. For these reasons, there has been growing interest in predicting demographic information from mobile phone metadata. Previous work focused on creating increasingly advanced features to be modeled with standard machine learning algorithms. Here, we instead model the raw mobile phone metadata directly using deep learning, exploiting the temporal nature of the patterns in the data. From high-level assumptions, we design a data representation and convolutional network architecture for modeling patterns within a week. We then examine three strategies for aggregating patterns across weeks and show that our method reaches state-of-the-art accuracy on both age and gender prediction using only the temporal modality in mobile metadata. We finally validate our method on low-activity users and evaluate the modeling assumptions.
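The abstract mentions a data representation for patterns within a week and strategies for aggregating across weeks. A minimal sketch of one plausible such representation (the 7x24 shape, the event format, and mean aggregation are assumptions for illustration, not details from the paper) turns each week of interaction events into a day-by-hour count matrix, which a convolutional network could then consume:

```python
# Hedged sketch: one plausible week-level representation of phone metadata.
# The 7x24 grid and mean aggregation are assumptions, not the paper's design.
import numpy as np

def week_matrix(events, days_per_week=7, hours_per_day=24):
    """events: list of (day_of_week, hour) tuples for one week's interactions.
    Returns a days x hours matrix of interaction counts."""
    m = np.zeros((days_per_week, hours_per_day))
    for day, hour in events:
        m[day, hour] += 1
    return m

def aggregate_weeks(week_matrices):
    """One simple aggregation strategy: element-wise mean across weeks."""
    return np.mean(np.stack(week_matrices), axis=0)

week1 = week_matrix([(0, 9), (0, 9), (4, 18)])  # two Monday-9am events, one Friday-6pm
week2 = week_matrix([(0, 9), (6, 12)])
avg = aggregate_weeks([week1, week2])
print(avg[0, 9])  # 1.5 interactions on average at Monday 09:00
```

A grid like this preserves the daily and weekly periodicity in the metadata, which is what makes a convolutional architecture a natural fit for the within-week modeling the abstract describes.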

* Accepted at ECML 2017. A previous version of this paper was titled 'Using Deep Learning to Predict Demographics from Mobile Phone Metadata' and was accepted at the ICLR 2016 workshop 