
Zeyu Qin

Entropic Distribution Matching in Supervised Fine-tuning of LLMs: Less Overfitting and Better Diversity

Aug 29, 2024

MoFO: Momentum-Filtered Optimizer for Mitigating Forgetting in LLM Fine-Tuning

Jul 31, 2024

Step-On-Feet Tuning: Scaling Self-Alignment of LLMs via Bootstrapping

Feb 22, 2024

Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators

Oct 11, 2023

Towards Stable Backdoor Purification through Feature Shift Tuning

Oct 07, 2023

Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks

Feb 03, 2023

Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation

Oct 12, 2022

Adaptive Smoothness-weighted Adversarial Training for Multiple Perturbations with Its Stability Analysis

Oct 02, 2022

Theoretical Study of Random Noise Defense against Query-Based Black-Box Attacks

Apr 23, 2021