Ninghui Li

SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks

Jun 12, 2025

LLM Agents Should Employ Security Principles

May 29, 2025

CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling

Jan 27, 2025

Federated Learning Privacy: Attacks, Defenses, Applications, and Policy Landscape - A Survey

May 06, 2024

Towards Principled Assessment of Tabular Data Synthesis Algorithms

Feb 09, 2024

MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training

Nov 02, 2023

Differentially Private Vertical Federated Clustering

Aug 02, 2022

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

Jan 23, 2022

Black-box Model Inversion Attribute Inference Attacks on Classification Models

Dec 07, 2020

Continuous Release of Data Streams under both Centralized and Local Differential Privacy

May 24, 2020