Chenan Wang

Detection and Recovery Against Deep Neural Network Fault Injection Attacks Based on Contrastive Learning
Jan 30, 2024
Chenan Wang, Pu Zhao, Siyue Wang, Xue Lin

Dynamic Adversarial Attacks on Autonomous Driving Systems
Dec 10, 2023
Amirhosein Chahe, Chenan Wang, Abhishek Jeyapratap, Kaidi Xu, Lifeng Zhou

Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?
Nov 30, 2023
Zhengyue Zhao, Jinhao Duan, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Xing Hu

Semantic Adversarial Attacks via Diffusion Models
Sep 14, 2023
Chenan Wang, Jinhao Duan, Chaowei Xiao, Edward Kim, Matthew Stamm, Kaidi Xu

Shifting Attention to Relevance: Towards the Uncertainty Estimation of Large Language Models
Jul 03, 2023
Jinhao Duan, Hao Cheng, Shiqi Wang, Chenan Wang, Alex Zavalny, Renjing Xu, Bhavya Kailkhura, Kaidi Xu

Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation
Jun 02, 2023
Zhengyue Zhao, Jinhao Duan, Xing Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

Mixture of Robust Experts (MoRE): A Flexible Defense Against Multiple Perturbations
Apr 21, 2021
Hao Cheng, Kaidi Xu, Chenan Wang, Xue Lin, Bhavya Kailkhura, Ryan Goldhahn
