
Yechao Zhang

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples

Mar 19, 2024

AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning

Aug 14, 2023

Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training

Jul 19, 2023

BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label

Jul 13, 2022

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

Mar 28, 2022

Towards Efficient Data-Centric Robust Machine Learning with Noise-based Augmentation

Mar 08, 2022