Adel Javanmard

Anonymous Learning via Look-Alike Clustering: A Precise Analysis of Model Generalization

Oct 09, 2023
Adel Javanmard, Vahab Mirrokni

While personalized recommendation systems have become increasingly popular, ensuring user data protection remains a top concern in the development of these learning systems. A common approach to enhancing privacy involves training models using anonymous data rather than individual data. In this paper, we explore a natural technique called \emph{look-alike clustering}, which involves replacing sensitive features of individuals with the cluster's average values. We provide a precise analysis of how training models using anonymous cluster centers affects their generalization capabilities. We focus on an asymptotic regime where the size of the training set grows in proportion to the feature dimension. Our analysis is based on the Convex Gaussian Minimax Theorem (CGMT) and allows us to theoretically understand the role of different model components in the generalization error. In addition, we demonstrate that in certain high-dimensional regimes, training over anonymous cluster centers acts as a regularization and improves the generalization error of the trained models. Finally, we corroborate our asymptotic theory with finite-sample numerical experiments, where we observe a perfect match when the sample size is only on the order of a few hundred.
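
The anonymization step described here is easy to state in code: partition users into clusters and replace each user's sensitive features with the average of those features over their cluster, then train on the anonymized rows. Below is a minimal sketch of that preprocessing; the use of k-means on the non-sensitive features and all variable names are illustrative choices, not prescribed by the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def look_alike_anonymize(X_public, X_sensitive, n_clusters=8, seed=0):
    """Replace each row's sensitive features with its cluster's average values.

    Clustering on the public (non-sensitive) block is an illustrative choice;
    the analysis is agnostic to how the look-alike clusters are formed.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X_public)
    X_anon = X_sensitive.copy()
    for c in range(n_clusters):
        mask = labels == c
        X_anon[mask] = X_sensitive[mask].mean(axis=0)  # anonymous cluster center
    return np.hstack([X_public, X_anon])

# usage: train any downstream model on the anonymized design matrix
rng = np.random.default_rng(0)
X_pub, X_sens = rng.normal(size=(500, 20)), rng.normal(size=(500, 30))
X_train = look_alike_anonymize(X_pub, X_sens)
```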

* accepted at the Conference on Neural Information Processing Systems (NeurIPS 2023) 

Causal Inference with Differentially Private (Clustered) Outcomes

Aug 02, 2023
Adel Javanmard, Vahab Mirrokni, Jean Pouget-Abadie

Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their potentially sensitive responses. Of the many ways of ensuring privacy, label differential privacy is a widely used measure of an algorithm's privacy guarantee, which might encourage participants to share responses without running the risk of de-anonymization. Many differentially private mechanisms inject noise into the original dataset to achieve this guarantee, which increases the variance of most statistical estimators and makes precise measurement of causal effects difficult: there is a fundamental privacy-variance trade-off in performing causal analyses on differentially private data. With the aim of achieving lower variance for stronger privacy guarantees, we suggest a new differential privacy mechanism, "Cluster-DP", which leverages any given cluster structure of the data while still allowing for the estimation of causal effects. We show that, depending on an intuitive measure of cluster quality, we can reduce the variance loss while maintaining our privacy guarantees. We compare its performance, theoretically and empirically, to that of its unclustered version and a more extreme uniform-prior version which does not use any information from the original response distribution, both of which are special cases of the "Cluster-DP" algorithm.
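
As a concrete point of reference, the snippet below implements the standard k-ary randomized-response mechanism for label differential privacy together with a debiased difference-in-means ATE estimate for binary outcomes. It corresponds to the uniform-prior baseline mentioned above rather than the Cluster-DP mechanism itself, whose cluster-aware resampling and privacy calibration are not reproduced here; the function names and the binary-case debiasing are illustrative.

```python
import numpy as np

def randomized_response(labels, k, eps, rng):
    """eps-label-DP k-ary randomized response: keep the true label with
    probability e^eps / (e^eps + k - 1), otherwise report a uniformly random
    other label.  Cluster-DP instead resamples from a cluster-level response
    distribution (its calibration is omitted here)."""
    n = len(labels)
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    keep = rng.random(n) < p_keep
    other = (labels + rng.integers(1, k, size=n)) % k   # uniform over the k-1 other labels
    return np.where(keep, labels, other)

def debiased_ate(y_priv, treated, eps):
    """Unbiased difference-in-means ATE from binary outcomes privatized above (k=2)."""
    p = np.exp(eps) / (np.exp(eps) + 1)
    unbias = lambda y: (y.mean() - (1 - p)) / (2 * p - 1)
    return unbias(y_priv[treated]) - unbias(y_priv[~treated])

# usage on synthetic binary outcomes
rng = np.random.default_rng(1)
treated = rng.random(10000) < 0.5
y = (rng.random(10000) < np.where(treated, 0.6, 0.4)).astype(int)
y_priv = randomized_response(y, k=2, eps=1.0, rng=rng)
print(debiased_ate(y_priv, treated, eps=1.0))   # close to the true ATE of 0.2
```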

* 41 pages, 10 figures 

Measuring Re-identification Risk

Apr 12, 2023
CJ Carey, Travis Dick, Alessandro Epasto, Adel Javanmard, Josh Karlin, Shankar Kumar, Andres Munoz Medina, Vahab Mirrokni, Gabriel Henrique Nunes, Sergei Vassilvitskii, Peilin Zhong

Compact user representations (such as embeddings) form the backbone of personalization services. In this work, we present a new theoretical framework to measure re-identification risk in such user representations. Our framework, based on hypothesis testing, formally bounds the probability that an attacker may be able to obtain the identity of a user from their representation. As an application, we show that our framework is general enough to model important real-world applications such as Chrome's Topics API for interest-based advertising. We complement our theoretical bounds by showing provably good attack algorithms for re-identification, which we use to estimate the re-identification risk in the Topics API. We believe this work provides a rigorous and interpretable notion of re-identification risk and a framework to measure it that can be used to inform real-world applications.
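
One simple empirical complement to such bounds is to run an explicit matching attack: release two (noisy) versions of the same users' embeddings and count how often a nearest-neighbor attacker links them correctly. The sketch below is an illustrative attack of this kind, not the paper's hypothesis-testing framework or its bound; the names and the noise model are assumptions.

```python
import numpy as np

def nn_reid_rate(release_1, release_2):
    """Fraction of users re-identified by nearest-neighbor (cosine) matching of
    two releases of the same users' representations (rows aligned by user)."""
    a = release_1 / np.linalg.norm(release_1, axis=1, keepdims=True)
    b = release_2 / np.linalg.norm(release_2, axis=1, keepdims=True)
    match = (a @ b.T).argmax(axis=1)        # attacker's best guess per user
    return (match == np.arange(len(a))).mean()

# usage: two noisy snapshots of the same user embeddings
rng = np.random.default_rng(2)
users = rng.normal(size=(1000, 64))
print(nn_reid_rate(users + 0.5 * rng.normal(size=users.shape),
                   users + 0.5 * rng.normal(size=users.shape)))
```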

Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model

Mar 28, 2023
Rashmi Ranjan Bhuyan, Adel Javanmard, Sungchul Kim, Gourab Mukherjee, Ryan A. Rossi, Tong Yu, Handong Zhao

We consider dynamic pricing strategies in a streamed longitudinal data setup where the objective is to maximize, over time, the cumulative profit across a large number of customer segments. We consider a dynamic probit model in which the consumers' preferences as well as their price sensitivity vary over time. Building on the well-known finding that consumers sharing similar characteristics act in similar ways, we consider a global shrinkage structure, which assumes that the consumers' preferences across the different segments can be well approximated by a spatial autoregressive (SAR) model. In such a streamed longitudinal setup, we measure the performance of a dynamic pricing policy via regret, which is the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance. We propose a pricing policy based on penalized stochastic gradient descent (PSGD) and explicitly characterize its regret as a function of time, the temporal variability in the model parameters, and the strength of the auto-correlation network structure spanning the varied customer segments. Our regret analysis not only demonstrates the asymptotic optimality of the proposed policy but also shows that, for policy planning, it is essential to incorporate the available structural information, as policies based on unshrunken models are highly sub-optimal in this setup.
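
To make the policy concrete, here is a hedged sketch of one penalized-SGD update for a single customer segment under a probit purchase model, with a graph-Laplacian penalty standing in for the SAR shrinkage structure. The parametrization, step size, and penalty form are illustrative simplifications, not the paper's exact policy.

```python
import numpy as np
from scipy.stats import norm

def psgd_step(theta, g, x, price, y, L, lr=0.05, lam=0.1):
    """One penalized stochastic-gradient update for segment g.

    theta : (G, d) per-segment parameters, last coordinate multiplying -price
    x     : covariates of the arriving customer, len(x) == d - 1
    y     : 1 if the customer purchased at the posted price, else 0
    L     : (G, G) graph Laplacian encoding segment similarity (SAR stand-in)
    """
    z = np.append(x, -price)                         # utility-index features
    u = theta[g] @ z
    pdf = norm.pdf(u)
    cdf = np.clip(norm.cdf(u), 1e-8, 1 - 1e-8)
    grad_ll = (y * pdf / cdf - (1 - y) * pdf / (1 - cdf)) * z   # probit score
    grad_pen = 2.0 * lam * (L @ theta)[g]            # shrink toward neighboring segments
    theta[g] = theta[g] + lr * (grad_ll - grad_pen)  # ascend the penalized log-likelihood
    return theta
```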

* 34 pages, 5 figures 

Learning Rate Schedules in the Presence of Distribution Shift

Mar 27, 2023
Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah

We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution. We fully characterize the optimal learning rate schedule for online linear regression via a novel analysis with stochastic differential equations. For general convex loss functions, we propose new learning rate schedules that are robust to distribution shift, and we give upper and lower bounds for the regret that only differ by constants. For non-convex loss functions, we define a notion of regret based on the gradient norm of the estimated models and propose a learning schedule that minimizes an upper bound on the total expected regret. Intuitively, one expects changing loss landscapes to require more exploration, and we confirm that optimal learning rate schedules typically increase in the presence of distribution shift. Finally, we provide experiments for high-dimensional regression models and neural networks to illustrate these learning rate schedules and their cumulative regret.
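
The qualitative claim that optimal schedules tend to increase under drift is easy to probe numerically: run SGD for online linear regression while the true parameter drifts and compare cumulative excess loss under a decaying versus a mildly increasing schedule. The simulation below is only an illustration of that comparison; the drift model and the particular schedules are assumptions, not the paper's optimal schedules.

```python
import numpy as np

def tracking_regret(schedule, T=5000, d=20, drift=0.02, seed=0):
    """Cumulative excess squared loss of SGD relative to the drifting truth."""
    rng = np.random.default_rng(seed)
    theta, w, regret = rng.normal(size=d), np.zeros(d), 0.0
    for t in range(1, T + 1):
        theta += drift * rng.normal(size=d) / np.sqrt(d)   # distribution shift
        x = rng.normal(size=d)
        y = x @ theta + 0.1 * rng.normal()
        regret += (x @ w - y) ** 2 - (x @ theta - y) ** 2
        w -= schedule(t) * 2.0 * (x @ w - y) * x           # SGD step
    return regret

decaying = lambda t: 0.02 / np.sqrt(t)
increasing = lambda t: min(0.02, 0.002 * np.log(1.0 + t))
print(tracking_regret(decaying), tracking_regret(increasing))
```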

* 33 pages, 6 figures 

Prediction Sets for High-Dimensional Mixture of Experts Models

Oct 30, 2022
Adel Javanmard, Simeng Shao, Jacob Bien

Large datasets make it possible to build predictive models that can capture heterogeneous relationships between the response variable and the features. The mixture of high-dimensional linear experts model posits that observations come from a mixture of high-dimensional linear regression models, where the mixture weights are themselves feature-dependent. In this paper, we show how to construct valid prediction sets for an $\ell_1$-penalized mixture of experts model in the high-dimensional setting. We make use of a debiasing procedure to account for the bias induced by the penalization and propose a novel strategy for combining intervals to form a prediction set with coverage guarantees in the mixture setting. Synthetic examples and an application to the prediction of critical temperatures of superconducting materials show that our method has reliable practical performance.
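
As an illustration of the final combination step, the sketch below forms a prediction set as a union of per-expert Gaussian intervals, keeping only experts whose estimated gating probability at the query point is non-negligible. The debiasing of the $\ell_1$-penalized fits and the calibration that yields the coverage guarantee are the substance of the paper and are not reproduced here; the threshold and interface are assumptions.

```python
import numpy as np
from scipy.stats import norm

def moe_prediction_set(x, experts, gate_probs, alpha=0.1, tau=0.05):
    """Union of per-expert prediction intervals at a query point x.

    experts    : list of (beta, sigma) pairs -- point-prediction coefficients and
                 a noise-scale estimate per expert (debiasing omitted)
    gate_probs : estimated probability that each expert generated x
    """
    z = norm.ppf(1 - alpha / 2)
    intervals = []
    for (beta, sigma), p in zip(experts, gate_probs):
        if p < tau:
            continue                                 # drop experts unlikely to apply at x
        mu = float(x @ beta)
        intervals.append((mu - z * sigma, mu + z * sigma))
    return intervals                                 # prediction set = union of intervals
```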

* 36 pages, 6 figures, 2 tables 

GRASP: A Goodness-of-Fit Test for Classification Learning

Sep 05, 2022
Adel Javanmard, Mohammad Mehrabi

The performance of classifiers is often measured in terms of average accuracy on test data. Despite being a standard measure, average accuracy fails to characterize the fit of the model to the underlying conditional law of labels given the feature vector ($Y|X$), e.g. due to model misspecification, overfitting, and high dimensionality. In this paper, we consider the fundamental problem of assessing the goodness-of-fit of a general binary classifier. Our framework does not make any parametric assumption on the conditional law $Y|X$ and treats it as a black-box oracle model that can be accessed only through queries. We formulate the goodness-of-fit assessment problem as a tolerance hypothesis test of the form \[ H_0: \mathbb{E}\Big[D_f\Big({\sf Bern}(\eta(X))\|{\sf Bern}(\hat{\eta}(X))\Big)\Big]\leq \tau\,, \] where $D_f$ represents an $f$-divergence function, and $\eta(x)$ and $\hat{\eta}(x)$ respectively denote the true and estimated likelihood that a feature vector $x$ admits a positive label. We propose a novel test, called \grasp, for testing $H_0$, which works in finite-sample settings regardless of the feature distribution (distribution-free). We also propose model-X \grasp, designed for model-X settings where the joint distribution of the feature vector is known. Model-X \grasp uses this distributional information to achieve better power. We evaluate the performance of our tests through extensive numerical experiments.
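
To make the null hypothesis concrete, the helper below evaluates the population quantity inside $H_0$ by Monte Carlo when both $\eta$ and $\hat{\eta}$ can be queried, taking the $f$-divergence to be KL. This is only the quantity being tested, not the GRASP statistic itself, which works without access to the true $\eta$; the names are illustrative.

```python
import numpy as np

def bern_kl(p, q, eps=1e-12):
    """KL divergence between Bern(p) and Bern(q), elementwise."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def h0_population_quantity(eta, eta_hat, X):
    """Monte Carlo estimate of E[ D_f( Bern(eta(X)) || Bern(eta_hat(X)) ) ] with f = KL;
    the tolerance test asks whether this value is at most tau."""
    return float(bern_kl(eta(X), eta_hat(X)).mean())
```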

* 42 pages, 4 tables and 3 figures 

The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression

Jan 13, 2022
Hamed Hassani, Adel Javanmard

Successful deep learning models often involve training neural network architectures that contain more parameters than the number of training samples. Such overparametrized models have been extensively studied in recent years, and the virtues of overparametrization have been established from both the statistical perspective, via the double-descent phenomenon, and the computational perspective, via the structural properties of the optimization landscape. Despite the remarkable success of deep learning architectures in the overparametrized regime, it is also well known that these models are highly vulnerable to small adversarial perturbations in their inputs. Even when adversarially trained, their performance on perturbed inputs (robust generalization) is considerably worse than their best attainable performance on benign inputs (standard generalization). It is thus imperative to understand how overparametrization fundamentally affects robustness. In this paper, we provide a precise characterization of the role of overparametrization in robustness by focusing on random features regression models (two-layer neural networks with random first-layer weights). We consider a regime where the sample size, the input dimension, and the number of parameters grow in proportion to each other, and derive an asymptotically exact formula for the robust generalization error when the model is adversarially trained. Our theory reveals the nontrivial effect of overparametrization on robustness and indicates that for adversarially trained random features models, high overparametrization can hurt robust generalization.
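
For readers unfamiliar with the model class, the sketch below fits the random features regression model described above: a two-layer network whose first-layer weights are drawn at random and frozen, with only the second layer trained (here by ridge regression). This is standard training only; the adversarial training whose robust generalization error the paper characterizes is not implemented, and the ReLU activation and penalty level are illustrative.

```python
import numpy as np

def random_features_ridge(X_train, y_train, X_test, n_features=2000, lam=1e-3, seed=0):
    """Ridge regression on random ReLU features: random fixed first layer,
    trained second layer (no adversarial training)."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)     # frozen first-layer weights
    phi = lambda X: np.maximum(X @ W, 0.0)                # ReLU feature map
    F = phi(X_train)
    theta = np.linalg.solve(F.T @ F + lam * np.eye(n_features), F.T @ y_train)
    return phi(X_test) @ theta
```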

* 87 pages, 13 pdf figures 

Adversarial robustness for latent models: Revisiting the robust-standard accuracies tradeoff

Oct 22, 2021
Adel Javanmard, Mohammad Mehrabi

Over the past few years, several adversarial training methods have been proposed to improve the robustness of machine learning models against adversarial perturbations in the input. Despite remarkable progress in this regard, adversarial training is often observed to reduce the standard test accuracy. This phenomenon has intrigued the research community to investigate the potential tradeoff between standard accuracy and robust accuracy as two performance measures. In this paper, we revisit this tradeoff for latent models and argue that it is mitigated when the data enjoys a low-dimensional structure. In particular, we consider binary classification under two data generative models, namely the Gaussian mixture model and the generalized linear model, where the feature data lie on a low-dimensional manifold. We show that as the ratio of the manifold dimension to the ambient dimension decreases, one can obtain models that are nearly optimal with respect to both the standard accuracy and the robust accuracy measures.
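
A minimal generator for the kind of latent structure referred to above is sketched below: a binary Gaussian mixture whose features live in a random k-dimensional subspace of R^d, so the ratio k/d controls how low-dimensional the data manifold is. The linear subspace and all parameter names are illustrative stand-ins for the paper's generative models.

```python
import numpy as np

def gmm_on_subspace(n, d, k, margin=2.0, seed=0):
    """Binary Gaussian mixture whose features lie in a k-dimensional subspace of R^d."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.normal(size=(d, k)))[0]         # orthonormal basis, k << d
    theta = rng.normal(size=k)
    theta *= margin / np.linalg.norm(theta)              # class-mean direction in the subspace
    y = rng.choice([-1, 1], size=n)
    Z = y[:, None] * theta[None, :] + rng.normal(size=(n, k))   # k-dimensional latent mixture
    return Z @ U.T, y                                    # embed into the ambient space
```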
