Xilie Xu

Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models
Feb 19, 2024
Zihao Luo, Xilie Xu, Feng Liu, Yun Sing Koh, Di Wang, Jingfeng Zhang

AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework
Oct 03, 2023
Xilie Xu, Jingfeng Zhang, Mohan Kankanhalli

Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization
Apr 30, 2023
Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection
Feb 08, 2023
Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests
Feb 07, 2022
Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?
May 31, 2021
Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama

Guided Interpolation for Adversarial Training
Feb 15, 2021
Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
Feb 26, 2020
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli