Sanjay Kariyappa

Progressive Inference: Explaining Decoder-Only Sequence Classification Models Using Intermediate Predictions

Jun 03, 2024

Privacy-Preserving Algorithmic Recourse

Nov 23, 2023

SHAP@k: Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features

Jul 10, 2023

Information Flow Control in Machine Learning through Modular Model Architecture

Jun 05, 2023

Bounding the Invertibility of Privacy-preserving Instance Encoding using Fisher Information

May 06, 2023

Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information

Sep 21, 2022

Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis

Sep 12, 2022

Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning

Nov 25, 2021

Enabling Inference Privacy with Adaptive Noise Injection

Apr 06, 2021

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation

May 06, 2020