Dingfan Chen

Inside the Black Box: Detecting Data Leakage in Pre-trained Language Encoders

Aug 20, 2024

PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics

Apr 06, 2024

Towards Biologically Plausible and Private Gene Expression Data Generation

Feb 07, 2024

A Unified View of Differentially Private Deep Generative Modeling

Sep 27, 2023

MargCTGAN: A "Marginally" Better CTGAN for the Low Sample Regime

Jul 16, 2023

Data Forensics in Diffusion Models: A Systematic Analysis of Membership Privacy

Feb 15, 2023

Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy

Feb 02, 2023

Private Set Generation with Discriminative Information

Nov 07, 2022

RelaxLoss: Defending Membership Inference Attacks without Losing Utility

Jul 12, 2022

Responsible Disclosure of Generative Models Using Scalable Fingerprinting

Dec 16, 2020