Mateo Rojas-Carulla

Recommendations on test datasets for evaluating AI solutions in pathology

Apr 21, 2022
André Homeyer, Christian Geißler, Lars Ole Schwen, Falk Zakrzewski, Theodore Evans, Klaus Strohmenger, Max Westphal, Roman David Bülow, Michaela Kargl, Aray Karjauv, Isidre Munné-Bertran, Carl Orge Retzlaff, Adrià Romero-López, Tomasz Sołtysiński, Markus Plass, Rita Carvalho, Peter Steinbach, Yu-Chia Lan, Nassim Bouteldja, David Haber, Mateo Rojas-Carulla, Alireza Vafaei Sadr, Matthias Kraft, Daniel Krüger, Rutger Fick, Tobias Lang, Peter Boor, Heimo Müller, Peter Hufnagl, Norman Zerbe

Artificial intelligence (AI) solutions that automatically extract information from digital histology images have shown great promise for improving pathological diagnosis. Prior to routine use, it is important to evaluate their predictive performance and obtain regulatory approval. This assessment requires appropriate test datasets. However, compiling such datasets is challenging, and specific recommendations are lacking. A committee of various stakeholders, including commercial AI developers, pathologists, and researchers, discussed key aspects and conducted extensive literature reviews on test datasets in pathology. Here, we summarize the results and derive general recommendations for the collection of test datasets. We address several questions: Which and how many images are needed? How should low-prevalence subsets be handled? How can potential bias be detected? How should datasets be reported? What are the regulatory requirements in different countries? The recommendations are intended to help AI developers demonstrate the utility of their products and to help regulatory agencies and end users verify reported performance measures. Further research is needed to formulate criteria for sufficiently representative test datasets so that AI solutions can operate with less user intervention and better support diagnostic workflows in the future.
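
One of the questions above, "Which and how many images are needed?", has a standard statistical core. As a rough illustration only (the paper's actual recommendations are in the full text), the sketch below computes how many positive cases a test set needs so that a Wald confidence interval for sensitivity reaches a desired half-width; the sensitivity target, half-width, and prevalence values are hypothetical.

```python
# Illustrative sample-size calculation for estimating a proportion (here,
# sensitivity) to a desired confidence-interval half-width. This is generic
# statistics, not a recommendation taken from the paper itself.
from math import ceil
from statistics import NormalDist

def positives_needed(expected_sensitivity: float, half_width: float,
                     confidence: float = 0.95) -> int:
    """Positive cases needed so the Wald CI for sensitivity has
    approximately the requested half-width."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    p = expected_sensitivity
    return ceil(z ** 2 * p * (1 - p) / half_width ** 2)

n_pos = positives_needed(expected_sensitivity=0.90, half_width=0.05)
prevalence = 0.10  # hypothetical prevalence in the intended test population
print(n_pos)                     # 139 positive cases
print(ceil(n_pos / prevalence))  # ~1390 cases overall at 10% prevalence
```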

GeNet: Deep Representations for Metagenomics

Jan 30, 2019
Mateo Rojas-Carulla, Ilya Tolstikhin, Guillermo Luque, Nicholas Youngblut, Ruth Ley, Bernhard Schölkopf

We introduce GeNet, a method for shotgun metagenomic classification from raw DNA sequences that exploits the known hierarchical structure between labels during training. We compare GeNet with the state-of-the-art methods Kraken and Centrifuge on datasets obtained from several sequencing technologies in which dataset shift occurs. We show that GeNet obtains competitive precision and good recall with orders-of-magnitude lower memory requirements. Moreover, we show that a linear model trained on top of representations learned by GeNet achieves recall comparable to state-of-the-art methods on the aforementioned datasets and reaches over 90% accuracy on a challenging pathogen detection problem. This provides evidence of the usefulness of the representations learned by GeNet for downstream biological tasks.
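
The key training idea in the abstract, exploiting the label hierarchy, can be sketched as a shared encoder with one classification head per taxonomic rank, trained on the sum of per-rank cross-entropies. The PyTorch sketch below is our reading of that idea; the ranks, label counts, read length, and layer sizes are assumptions, not GeNet's actual architecture.

```python
# Minimal sketch: shared encoder over one-hot DNA reads, one softmax head
# per taxonomic rank, loss = sum of per-rank cross-entropies.
import torch
import torch.nn as nn

RANKS = {"phylum": 40, "genus": 800, "species": 3000}  # assumed label counts

class HierarchicalClassifier(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(        # input: one-hot A/C/G/T channels
            nn.Conv1d(4, hidden, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict(
            {rank: nn.Linear(hidden, n) for rank, n in RANKS.items()})

    def forward(self, x):                    # x: (batch, 4, read_len)
        h = self.encoder(x)
        return {rank: head(h) for rank, head in self.heads.items()}

def hierarchical_loss(logits, labels):
    # Sum cross-entropy over every rank so the hierarchy shapes training.
    return sum(nn.functional.cross_entropy(logits[r], labels[r]) for r in logits)

model = HierarchicalClassifier()
x = nn.functional.one_hot(torch.randint(4, (8, 150)), 4).float().transpose(1, 2)
labels = {r: torch.randint(n, (8,)) for r, n in RANKS.items()}
hierarchical_loss(model(x), labels).backward()
```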

Invariant Models for Causal Transfer Learning

Sep 24, 2018
Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, Jonas Peters

Methods of transfer learning try to combine knowledge from several related tasks (or domains) to improve performance on a test task. Inspired by causal methodology, we relax the usual covariate shift assumption and assume that it holds true only for a subset of predictor variables: the conditional distribution of the target variable given this subset of predictors is invariant across all tasks. We show how this assumption can be motivated from ideas in the field of causality. We focus on the problem of domain generalization, in which no examples from the test task are observed. We prove that, in an adversarial setting, using this subset for prediction is optimal for domain generalization; we further provide examples in which the tasks are sufficiently diverse that the estimator outperforms pooling the data, even on average. If examples from the test task are available, we also provide a method to transfer knowledge from the training tasks and exploit all available features for prediction, though we provide no guarantees for this method. We introduce a practical method that automatically infers the above subset, and we provide corresponding code. We present results on synthetic datasets and a gene deletion dataset.

* Journal of Machine Learning Research 19 (2018)  
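
A minimal sketch of the subset search implied by the abstract: for each candidate subset S of predictors, fit a pooled regression of Y on X_S and test whether the residuals look identically distributed across tasks, keeping subsets where invariance cannot be rejected. The linear model and Levene test below are simple stand-in choices of ours, not the paper's exact procedure.

```python
# Simplified invariant-subset search: accept a subset S when the residuals of
# a pooled linear regression of Y on X_S appear identically distributed
# across tasks. Assumes centered data (no intercept term).
from itertools import combinations
import numpy as np
from scipy.stats import levene

def invariant_subsets(X_per_task, Y_per_task, alpha=0.05):
    """X_per_task: list of (n_t, d) arrays; Y_per_task: list of (n_t,) arrays."""
    d = X_per_task[0].shape[1]
    accepted = []
    for k in range(1, d + 1):
        for S in combinations(range(d), k):
            cols = list(S)
            Xp = np.vstack([X[:, cols] for X in X_per_task])
            Yp = np.concatenate(Y_per_task)
            beta, *_ = np.linalg.lstsq(Xp, Yp, rcond=None)
            residuals = [Y - X[:, cols] @ beta
                         for X, Y in zip(X_per_task, Y_per_task)]
            if levene(*residuals).pvalue > alpha:  # invariance not rejected
                accepted.append(S)
    return accepted
```

The exhaustive search is exponential in the number of predictors, so a sketch like this is only practical for small d.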

Learning Independent Causal Mechanisms

Sep 08, 2018
Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, Bernhard Schölkopf

Statistical learning relies upon data sampled from a distribution, and we usually do not care what actually generated it in the first place. From the point of view of causal modeling, the structure of each distribution is induced by physical mechanisms that give rise to dependences between observables. Mechanisms, however, can be meaningful autonomous modules of generative models that make sense beyond a particular entailed data distribution, lending themselves to transfer between problems. We develop an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points. The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization. We analyze the proposed method in a series of experiments on image data. Each expert learns to map a subset of the transformed data back to a reference distribution. The learned mechanisms generalize to novel domains. We discuss implications for transfer learning and links to recent trends in generative modeling.

* Proceedings of the 35th International Conference on Machine Learning (ICML 2018), PMLR 80:4036-4044  
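
The competition-of-experts mechanism can be shown in toy form. In the sketch below, the paper's adversarial discriminator is replaced by the log-likelihood of a fixed N(0,1) reference, and each expert is restricted to a learnable shift; both are simplifications of ours, not the paper's setup. Each sample is routed to the expert whose output currently looks most like the reference, and only that winner is updated, which drives specialization onto one mechanism each.

```python
# Toy competition of experts: three unknown shift "mechanisms" corrupt a
# standard normal; each expert learns an inverse shift, and only the expert
# whose output best matches the N(0,1) reference gets updated per sample.
import numpy as np

rng = np.random.default_rng(0)
shifts = [-4.0, 0.0, 4.0]            # unknown mechanisms applied to the data
b = rng.normal(size=3)               # one learnable inverse shift per expert
lr = 0.05

for _ in range(5000):
    x = rng.normal() + rng.choice(shifts)  # sample from a random mechanism
    outs = x + b                           # each expert's proposed inverse
    w = int(np.argmin(np.abs(outs)))       # highest N(0,1) log-likelihood wins
    b[w] += lr * (-outs[w])                # gradient ascent on -0.5 * y**2

print(np.sort(b).round(1))  # typically close to [-4., 0., 4.]: each expert's
                            # shift converges near the negative of one mechanism
```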

Avoiding Discrimination through Causal Reasoning

Jan 21, 2018
Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf

Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively. Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them.

* Advances in Neural Information Processing Systems 30, 2017, pp. 656-666. http://papers.nips.cc/paper/6668-avoiding-discrimination-through-causal-reasoning  
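
The abstract's central point, that observational criteria see only the joint distribution and therefore cannot settle fairness questions by themselves, can be illustrated with a toy simulation. The scenario and numbers below are our own; "resolving variable" and "proxy" are the paper's concepts.

```python
# Toy illustration: two different causal stories produce the same data, so
# every observational criterion assigns them the same verdict.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
A = rng.integers(0, 2, n)                        # protected attribute
M = (rng.random(n) < 0.3 + 0.4 * A).astype(int)  # mediator influenced by A
R = M                                            # predictor using only M

gap = R[A == 1].mean() - R[A == 0].mean()
print(f"demographic parity gap: {gap:.2f}")      # ~0.40, and identical whether
# M is a resolving variable (an influence of A regarded as legitimate) or a
# proxy encoding discrimination -- the joint distribution, and hence every
# observational criterion, cannot tell the two stories apart.
```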

Discriminative k-shot learning using probabilistic models

Dec 09, 2017
Matthias Bauer, Mateo Rojas-Carulla, Jakub Bartłomiej Świątkowski, Bernhard Schölkopf, Richard E. Turner

This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach leverages not only the feature-based representation learned by a neural network on the initial task (representational transfer), but also information about the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final-layer weights of the neural network, which acts as a prior for probabilistic k-shot learning. We show that even a simple probabilistic model achieves state-of-the-art performance on a standard k-shot learning dataset by a large margin. Moreover, it accurately models uncertainty, leading to well-calibrated classifiers, and is easily extensible and flexible, unlike many recent approaches to k-shot learning.
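
A minimal sketch of the concept-transfer idea: treat the base classes' final-layer weight vectors as samples from a Gaussian prior and shrink the weights of a new k-shot class toward it. Using class-mean features as stand-in weight vectors and an isotropic conjugate update are simplifications of ours, not the paper's exact model.

```python
# Gaussian prior over class vectors, used to regularise a new k-shot class.
import numpy as np

def kshot_class_vector(base_class_means, support_features, noise_var=1.0):
    """Posterior mean for a new class vector under w ~ N(mu0, sigma0^2 I),
    treating the k support features as noisy observations of w."""
    mu0 = base_class_means.mean(axis=0)
    sigma0_sq = base_class_means.var(axis=0).mean()   # pooled prior variance
    k = len(support_features)
    xbar = support_features.mean(axis=0)
    # Standard Gaussian conjugate update: shrink xbar toward the prior mean.
    return ((k / noise_var) * xbar + mu0 / sigma0_sq) / \
           (k / noise_var + 1 / sigma0_sq)

# Hypothetical usage: 100 base classes, 64-dim features, k = 5 shots.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 64))              # base-class vectors
support = rng.normal(loc=1.0, size=(5, 64))    # features of the new class
w_new = kshot_class_vector(base, support)      # score by dot product with w_new
```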

Causal Discovery Using Proxy Variables

Feb 23, 2017
Mateo Rojas-Carulla, Marco Baroni, David Lopez-Paz

Discovering causal relations is fundamental to reasoning and intelligence. In particular, observational causal discovery algorithms estimate the cause-effect relation between two random entities $X$ and $Y$, given $n$ samples from $P(X,Y)$. In this paper, we develop a framework to estimate the cause-effect relation between two static entities $x$ and $y$: for instance, an art masterpiece $x$ and its fraudulent copy $y$. To this end, we introduce the notion of proxy variables, which allow the construction of a pair of random entities $(A,B)$ from the pair of static entities $(x,y)$. Estimating the cause-effect relation between $A$ and $B$ using an observational causal discovery algorithm then yields an estimate of the cause-effect relation between $x$ and $y$. For example, our framework detects the causal relation between unprocessed photographs and their modifications, and orders in time a set of shuffled frames from a video. As our main case study, we introduce a human-elicited dataset of 10,000 causally-linked word pairs from natural language. Our methods discover 75% of these causal relations. Finally, we discuss the role of proxy variables in machine learning as a general tool for incorporating static knowledge into prediction tasks.
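
The proxy-variable recipe can be sketched as follows: turn the two static entities $x$ and $y$ (here, two long arrays such as flattened images) into paired random entities $(A,B)$ by sampling shared random coordinates, then hand $(A,B)$ to any observational cause-effect method. The coordinate-sampling proxy and the simple regression-error direction score below are illustrative choices of ours, not the paper's exact instantiation.

```python
# Sketch: proxy samples from two static entities, then a regression-error
# style direction score (smaller forward fit error => that direction wins).
import numpy as np

def proxy_samples(x, y, n=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    idx = rng.integers(0, len(x), size=n)   # shared proxy: random positions
    return x[idx], y[idx]

def fit_error(a, b, degree=3):
    # MSE of a polynomial regression a -> b, both variables standardized.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    coeffs = np.polyfit(a, b, degree)
    return np.mean((b - np.polyval(coeffs, a)) ** 2)

def infer_direction(x, y):
    A, B = proxy_samples(x, y)
    return "x -> y" if fit_error(A, B) < fit_error(B, A) else "y -> x"

# Hypothetical example: y is a pointwise nonlinear "modification" of x.
rng = np.random.default_rng(1)
x = rng.normal(size=50_000)
y = np.tanh(2 * x) + 0.1 * rng.normal(size=50_000)
print(infer_direction(x, y))                # expected: "x -> y"
```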
