Shawn Shan

Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?

Feb 06, 2024
Anna Yoo Jeong Ha, Josephine Passananti, Ronik Bhaskar, Shawn Shan, Reid Southen, Haitao Zheng, Ben Y. Zhao

Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models

Oct 20, 2023
Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, Ben Y. Zhao

SoK: Anti-Facial Recognition Technology

Dec 08, 2021
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao

Traceback of Data Poisoning Attacks in Neural Networks

Oct 13, 2021
Shawn Shan, Arjun Nitin Bhagoji, Haitao Zheng, Ben Y. Zhao

Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks

Jun 24, 2020
Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Haitao Zheng, Ben Y. Zhao

Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models

Feb 19, 2020
Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao

Gotta Catch 'Em All: Using Concealed Trapdoors to Detect Adversarial Attacks on Neural Networks

Apr 18, 2019
Shawn Shan, Emily Willson, Bolun Wang, Bo Li, Haitao Zheng, Ben Y. Zhao
