Moulinath Banerjee

Conditional independence testing under model misspecification

Jul 05, 2023
Felipe Maia Polo, Yuekai Sun, Moulinath Banerjee

Conditional independence (CI) testing is fundamental and challenging in modern statistics and machine learning. Many modern methods for CI testing rely on powerful supervised learning methods to learn regression functions or Bayes predictors as an intermediate step. Although these methods are guaranteed to control Type-I error when the supervised learning methods accurately estimate the regression functions or Bayes predictors, their behavior is less understood when they fail due to model misspecification. In a broader sense, model misspecification can arise even when universal approximators (e.g., deep neural nets) are employed. Motivated by this, we study the performance of regression-based CI tests under model misspecification. Specifically, we propose new approximations or upper bounds for the testing errors of three regression-based tests that depend on misspecification errors. Moreover, we introduce the Rao-Blackwellized Predictor Test (RBPT), a novel regression-based CI test that is robust against model misspecification. Finally, we conduct experiments with artificial and real data, showcasing the usefulness of our theory and methods.
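
The abstract does not spell out the individual regression-based tests it analyzes, and the sketch below is not the RBPT, whose Rao-Blackwellization step is specific to the paper. It only illustrates the generic regression-based recipe: test X independent of Y given Z by checking whether a predictor that sees X on top of Z improves held-out loss. The model class, squared-error loss, and paired t-test here are illustrative assumptions.

```python
# Minimal sketch of a generic regression-based CI test for "X indep. of Y given Z":
# compare held-out losses of a predictor using (X, Z) versus one using Z alone.
import numpy as np
from scipy import stats
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def regression_ci_test(X, Z, Y, seed=0):
    X = np.asarray(X).reshape(len(X), -1)
    Z = np.asarray(Z).reshape(len(Z), -1)
    Y = np.asarray(Y).ravel()
    XZ = np.column_stack([X, Z])

    XZ_tr, XZ_te, Z_tr, Z_te, Y_tr, Y_te = train_test_split(
        XZ, Z, Y, test_size=0.5, random_state=seed)

    full = GradientBoostingRegressor(random_state=seed).fit(XZ_tr, Y_tr)
    reduced = GradientBoostingRegressor(random_state=seed).fit(Z_tr, Y_tr)

    # Per-sample squared errors on the held-out half.
    loss_full = (Y_te - full.predict(XZ_te)) ** 2
    loss_reduced = (Y_te - reduced.predict(Z_te)) ** 2

    # One-sided paired t-test: under the null, adding X should not reduce the
    # loss, so a significantly smaller "full" loss is evidence against CI.
    return stats.ttest_rel(loss_reduced, loss_full, alternative="greater").pvalue
```

As the abstract emphasizes, the validity of such a test hinges on the reduced model estimating the true regression function well; under misspecification this guarantee breaks down, which is the regime the paper studies.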

Understanding new tasks through the lens of training data via exponential tilting

May 26, 2022
Subha Maity, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun

Deploying machine learning models to new tasks is a major challenge despite the large size of modern training datasets. However, it is conceivable that the training data can be reweighted to be more representative of the new (target) task. We consider the problem of reweighting the training samples to gain insights into the distribution of the target task. Specifically, we formulate a distribution shift model based on the exponential tilt assumption and learn importance weights for the training data by minimizing the KL divergence between the labeled training and unlabeled target datasets. The learned weights can then be used for downstream tasks such as target performance evaluation, fine-tuning, and model selection. We demonstrate the efficacy of our method on the Waterbirds and Breeds benchmarks.
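
As a rough illustration of the exponential-tilt idea, the sketch below fits tilt parameters by matching the mean of a statistic T(x) between source and target data via the standard convex dual of the KL-projection, and then forms importance weights proportional to exp(theta·T(x)). The paper's model tilts per-class statistics of (x, y); using a feature-only statistic here is a simplification for illustration, and T itself is a placeholder the user must supply.

```python
# Hedged sketch: exponential-tilt importance weights by moment matching.
# Solves  min_theta  log E_src[exp(theta·T)] - theta·E_tgt[T],
# whose optimum gives weights w(x) proportional to exp(theta·T(x)).
import numpy as np
from scipy.optimize import minimize

def fit_tilt_weights(T_src, T_tgt):
    """T_src: (n, d) statistics on labeled source data,
       T_tgt: (m, d) statistics on unlabeled target data."""
    mu_tgt = T_tgt.mean(axis=0)

    def dual(theta):
        logits = T_src @ theta
        # log-mean-exp for numerical stability
        lme = np.log(np.mean(np.exp(logits - logits.max()))) + logits.max()
        return lme - theta @ mu_tgt

    theta = minimize(dual, x0=np.zeros(T_src.shape[1]), method="BFGS").x
    w = np.exp(T_src @ theta)
    return w / w.mean(), theta  # weights normalized to mean 1
```

At the optimum, the tilted source mean of T matches the target mean, which is exactly the distribution-matching constraint behind the tilt model.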

Predictor-corrector algorithms for stochastic optimization under gradual distribution shift

May 26, 2022
Subha Maity, Debarghya Mukherjee, Moulinath Banerjee, Yuekai Sun

Time-varying stochastic optimization problems frequently arise in machine learning practice (e.g. gradual domain shift, object tracking, strategic classification). Although most problems are solved in discrete time, the underlying process is often continuous in nature. We exploit this underlying continuity by developing predictor-corrector algorithms for time-varying stochastic optimizations. We provide error bounds for the iterates, both in presence of pure and noisy access to the queries from the relevant derivatives of the loss function. Furthermore, we show (theoretically and empirically in several examples) that our method outperforms non-predictor corrector methods that do not exploit the underlying continuous process.
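
The sketch below shows the generic predictor-corrector pattern this abstract refers to, not the paper's exact algorithm: the predictor extrapolates the minimizer using the implicit-function-theorem drift d(theta*)/dt = -H^{-1} d/dt(grad f), and the corrector runs a few gradient steps at the new time. The callables grad, hess, cross, the step sizes, and the toy objective are all illustrative assumptions.

```python
# Hedged sketch of a predictor-corrector scheme for a time-varying f(theta, t).
import numpy as np

def predictor_corrector(theta0, grad, hess, cross, t_grid, lr=0.1, k_corr=3):
    """grad(theta, t)  -> gradient of f w.r.t. theta
       hess(theta, t)  -> Hessian of f w.r.t. theta
       cross(theta, t) -> mixed derivative d/dt grad f(theta, t)"""
    theta = np.asarray(theta0, dtype=float)
    path = [theta.copy()]
    for t_prev, t_next in zip(t_grid[:-1], t_grid[1:]):
        dt = t_next - t_prev
        # Predictor: follow the estimated drift of the time-varying minimizer.
        theta = theta - dt * np.linalg.solve(hess(theta, t_prev),
                                             cross(theta, t_prev))
        # Corrector: a few gradient steps on the time-t_next objective.
        for _ in range(k_corr):
            theta = theta - lr * grad(theta, t_next)
        path.append(theta.copy())
    return np.array(path)

# Toy usage: f(theta, t) = 0.5 * ||theta - c(t)||^2 with a drifting target c(t).
c = lambda t: np.array([np.sin(t), np.cos(t)])
grad = lambda th, t: th - c(t)
hess = lambda th, t: np.eye(2)
cross = lambda th, t: -np.array([np.cos(t), -np.sin(t)])
path = predictor_corrector(np.zeros(2), grad, hess, cross, np.linspace(0, 3, 31))
```

Dropping the predictor step recovers the plain "re-optimize at each time" baseline that the abstract says is outperformed.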

Two Simple Ways to Learn Individual Fairness Metrics from Data

Jun 19, 2020
Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun

Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness. Despite its benefits, it depends on a task-specific fair metric that encodes our intuition of what is fair and unfair for the ML task at hand, and the lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness. In this paper, we present two simple ways to learn fair metrics from a variety of data types. We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases. We also provide theoretical guarantees on the statistical performance of both approaches.

* To appear in ICML 2020 
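
A common construction in this line of work, sketched below, estimates a "sensitive" direction from data and measures distances only in its orthogonal complement. This is meant as a generic illustration of the sensitive-subspace idea, not as either of the paper's two estimators; the logistic-regression step and the single-direction projection are illustrative assumptions.

```python
# Hedged sketch: a fair Mahalanobis-style metric that ignores an estimated
# sensitive direction (learned by regressing the protected attribute on X).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fair_metric_from_data(X, protected):
    """X: (n, d) features, protected: (n,) binary protected attribute.
       Returns a distance function d(x, x') that ignores the sensitive direction."""
    w = LogisticRegression(max_iter=1000).fit(X, protected).coef_.ravel()
    u = w / np.linalg.norm(w)              # estimated sensitive direction
    P = np.eye(len(u)) - np.outer(u, u)    # projector onto its complement

    def dist(x, x_prime):
        diff = P @ (np.asarray(x) - np.asarray(x_prime))
        return float(np.sqrt(diff @ diff))
    return dist
```

Under such a metric, two individuals who differ mainly along the sensitive direction are treated as close, which is the comparability notion individual fairness requires.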
Minimax optimal approaches to the label shift problem

Apr 04, 2020
Subha Maity, Yuekai Sun, Moulinath Banerjee

We study minimax rates of convergence in the label shift problem. In addition to the usual setting in which the learner only has access to unlabeled examples from the target domain, we also consider the setting in which a small number of labeled examples from the target domain are available to the learner. Our study reveals a difference in the difficulty of the label shift problem in the two settings. We attribute this difference to the availability of data from the target domain to estimate the class conditional distributions in the latter setting. We also show that a distributional matching approach is minimax rate-optimal in the former setting.

Communication-Efficient Integrative Regression in High-Dimensions

Dec 26, 2019
Subha Maity, Yuekai Sun, Moulinath Banerjee

We consider the task of meta-analysis in high-dimensional settings in which the data sources we wish to integrate are similar but non-identical. To borrow strength across such heterogeneous data sources, we introduce a global parameter that addresses several identification issues. We also propose a one-shot estimator of the global parameter that preserves the anonymity of the data sources and converges at a rate that depends on the size of the combined dataset. Finally, we demonstrate the benefits of our approach on a large-scale drug treatment dataset involving several different cancer cell lines.
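
The sketch below shows only the generic one-shot recipe behind communication-efficient estimation: each data source computes a local sparse estimate and shares that summary alone, and the center aggregates it. The paper's estimator targets a global parameter and handles heterogeneity and identification issues that this naive averaging-and-thresholding sketch ignores; the regularization and threshold values are placeholders.

```python
# Hedged sketch of a generic one-shot (single communication round) estimator.
import numpy as np
from sklearn.linear_model import Lasso

def one_shot_estimate(sites, alpha=0.1, tau=0.05):
    """sites: list of (X_k, y_k) pairs, each kept privately at its data source."""
    local = [Lasso(alpha=alpha).fit(X, y).coef_ for X, y in sites]  # computed locally
    beta_bar = np.mean(local, axis=0)            # only coefficient vectors are shared
    return np.where(np.abs(beta_bar) > tau, beta_bar, 0.0)  # re-sparsify the average
```

Because only local coefficient vectors leave each site, the raw data stay anonymous, which is the communication and privacy constraint the abstract emphasizes.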

Change Point Estimation in a Dynamic Stochastic Block Model

Dec 07, 2018
Monika Bhattacharjee, Moulinath Banerjee, George Michailidis

We consider the problem of estimating the location of a single change point in a dynamic stochastic block model. We propose two methods for estimating the change point, together with the model parameters. The first employs a least squares criterion function that takes into account the full structure of the stochastic block model and is evaluated at each point in time. Hence, as an intermediate step, it requires estimating the community structure via a clustering algorithm at every time point. The second method comprises the following two steps: in the first, a least squares criterion is again evaluated at each time point, but it ignores the community structure and considers only a random graph generating mechanism exhibiting a change point. Once the change point is identified, in the second step, all network data before and after it are used together with a clustering algorithm to obtain the corresponding community structures and subsequently estimate the generating stochastic block model parameters. We compare the two methods, and for both of them, under their respective identifiability and certain additional regularity conditions, we establish rates of convergence and derive the asymptotic distributions of the change point estimators. The results are illustrated on synthetic data.

* Please see the .pdf file for an extended abstract 
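
One plausible reading of the first step of the second method is the least squares scan sketched below: ignore community structure, treat the networks as independent random graphs whose edge-probability matrix changes once, and pick the split that minimizes the squared deviations from the pre- and post-split mean adjacency matrices. Normalization and boundary trimming here are illustrative choices, not the paper's exact specification.

```python
# Hedged sketch of a least squares change-point scan over a network sequence.
import numpy as np

def change_point_scan(A, trim=2):
    """A: array of shape (T, n, n) of adjacency matrices; returns estimated tau."""
    T = A.shape[0]
    best_tau, best_cost = None, np.inf
    for tau in range(trim, T - trim):
        pre_mean, post_mean = A[:tau].mean(axis=0), A[tau:].mean(axis=0)
        cost = (((A[:tau] - pre_mean) ** 2).sum()
                + ((A[tau:] - post_mean) ** 2).sum())
        if cost < best_cost:
            best_tau, best_cost = tau, cost
    return best_tau
```

In the second step described in the abstract, the networks on either side of the estimated change point would then be pooled and clustered to recover the community structures and the block-model parameters.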