Shruti Tople

Microsoft Research

Closed-Form Bounds for DP-SGD against Record-level Inference

Feb 22, 2024

Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective

Nov 27, 2023

SoK: Memorization in General-Purpose Large Language Models

Oct 24, 2023

Why Train More? Effective and Efficient Membership Inference via Memorization

Oct 12, 2023

Re-aligning Shadow Models can Improve White-box Membership Inference Attacks

Jun 08, 2023

On the Efficacy of Differentially Private Few-shot Image Classification

Feb 02, 2023

Analyzing Leakage of Personally Identifiable Information in Language Models

Feb 01, 2023

SoK: Let The Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning

Dec 21, 2022

Invariant Aggregator for Defending Federated Backdoor Attacks

Oct 04, 2022

Membership Inference Attacks and Generalization: A Causal Perspective

Sep 18, 2022