Yunlong Mao

Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Mar 02, 2026

On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy

Sep 05, 2025

LabObf: A Label Protection Scheme for Vertical Federated Learning Through Label Obfuscation

May 27, 2024

A Split-and-Privatize Framework for Large Language Model Fine-Tuning

Dec 25, 2023

Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks

Apr 19, 2023