Rajiv Khanna

A Precise Characterization of SGD Stability Using Loss Surface Geometry

Jan 22, 2024
Gregory Dexter, Borja Ocejo, Sathiya Keerthi, Aman Gupta, Ayan Acharya, Rajiv Khanna

On Memorization and Privacy Risks of Sharpness Aware Minimization

Sep 30, 2023
Young In Kim, Pratiksha Agrawal, Johannes O. Royset, Rajiv Khanna

Generalization Guarantees via Algorithm-dependent Rademacher Complexity

Jul 04, 2023
Sarah Sachs, Tim van Erven, Liam Hodgkinson, Rajiv Khanna, Umut Simsekli

Feature Space Sketching for Logistic Regression

Mar 24, 2023
Gregory Dexter, Rajiv Khanna, Jawad Raheel, Petros Drineas

Fast Feature Selection with Fairness Constraints

Feb 28, 2022
Francesco Quinzan, Rajiv Khanna, Moshik Hershcovitch, Sarel Cohen, Daniel G. Waddington, Tobias Friedrich, Michael W. Mahoney

Generalization Properties of Stochastic Optimizers via Trajectory Analysis

Aug 02, 2021
Liam Hodgkinson, Umut Şimşekli, Rajiv Khanna, Michael W. Mahoney

LocalNewton: Reducing Communication Bottleneck for Distributed Learning

May 16, 2021
Vipul Gupta, Avishek Ghosh, Michal Derezinski, Rajiv Khanna, Kannan Ramchandran, Michael Mahoney

Adversarially-Trained Deep Nets Transfer Better

Jul 11, 2020
Francisco Utrera, Evan Kravitz, N. Benjamin Erichson, Rajiv Khanna, Michael W. Mahoney

Boundary thickness and robustness in learning models

Jul 09, 2020
Yaoqing Yang, Rajiv Khanna, Yaodong Yu, Amir Gholami, Kurt Keutzer, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney

Bayesian Coresets: An Optimization Perspective

Jul 01, 2020
Jacky Y. Zhang, Rajiv Khanna, Anastasios Kyrillidis, Oluwasanmi Koyejo
