Chongyu Fan

Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills
Jun 15, 2025

EPiC: Towards Lossless Speedup for Reasoning Training through Edge-Preserving CoT Condensation
Jun 04, 2025

Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond
Feb 07, 2025

Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning
Oct 09, 2024

Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
May 24, 2024

Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning
Mar 12, 2024

SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation
Oct 19, 2023