Yufei Han

INRIA Rocquencourt

Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting

Jan 05, 2026

From Risk to Resilience: Towards Assessing and Mitigating the Risk of Data Reconstruction Attacks in Federated Learning

Dec 17, 2025

Persistent Backdoor Attacks under Continual Fine-Tuning of LLMs

Dec 12, 2025

Dissecting Logical Reasoning in LLMs: A Fine-Grained Evaluation and Supervision Study

Jun 05, 2025

Trust Under Siege: Label Spoofing Attacks against Machine Learning for Android Malware Detection

Mar 14, 2025

NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images

Jun 11, 2024

Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning

Jun 10, 2024

Cross-Context Backdoor Attacks against Graph Prompt Learning

May 28, 2024

Defending Jailbreak Prompts via In-Context Adversarial Game

Feb 20, 2024

Manipulating Predictions over Discrete Inputs in Machine Teaching

Jan 31, 2024