Peihua Mai

RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation

May 24, 2024

Teach Large Language Models to Forget Privacy

Dec 30, 2023

Split-and-Denoise: Protecting Large Language Model Inference with Local Differential Privacy

Oct 13, 2023