Haitao Zheng

Can Backdoor Attacks Survive Time-Varying Models?

Jun 08, 2022
Huiying Li, Arjun Nitin Bhagoji, Ben Y. Zhao, Haitao Zheng

Global Mixup: Eliminating Ambiguity with Clustering

Jun 06, 2022
Xiangjin Xie, Yangning Li, Wang Chen, Kai Ouyang, Li Jiang, Haitao Zheng

Assessing Privacy Risks from Feature Vector Reconstruction Attacks

Feb 11, 2022
Emily Wenger, Francesca Falzon, Josephine Passananti, Haitao Zheng, Ben Y. Zhao

SoK: Anti-Facial Recognition Technology

Dec 08, 2021
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao

Traceback of Data Poisoning Attacks in Neural Networks

Oct 13, 2021
Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao

"Hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World

Sep 20, 2021
Emily Wenger, Max Bronckers, Christian Cianfarani, Jenna Cryan, Angela Sha, Haitao Zheng, Ben Y. Zhao

ASR-GLUE: A New Multi-task Benchmark for ASR-Robust Natural Language Understanding

Aug 30, 2021
Lingyun Feng, Jianwei Yu, Deng Cai, Songxiang Liu, Haitao Zheng, Yan Wang

Understanding the Effect of Bias in Deep Anomaly Detection

May 16, 2021
Ziyu Ye, Yuxin Chen, Haitao Zheng

Backdoor Attacks on Facial Recognition in the Physical World

Jun 25, 2020
Emily Wenger, Josephine Passananti, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao

Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks

Jun 24, 2020
Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Haitao Zheng, Ben Y. Zhao
