Zhenting Wang

Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?

Apr 10, 2024
Mingyu Jin, Qinkai Yu, Jingyuan Huang, Qingcheng Zeng, Zhenting Wang, Wenyue Hua, Haiyan Zhao, Kai Mei, Yanda Meng, Kaize Ding, Fan Yang, Mengnan Du, Yongfeng Zhang

Finding needles in a haystack: A Black-Box Approach to Invisible Watermark Detection

Mar 30, 2024
Minzhou Pan, Zhenting Wang, Xin Dong, Vikash Sehwag, Lingjuan Lyu, Xue Lin

How to Detect Unauthorized Data Usages in Text-to-image Diffusion Models

Jul 06, 2023
Zhenting Wang, Chen Chen, Yuchen Liu, Lingjuan Lyu, Dimitris Metaxas, Shiqing Ma

Alteration-free and Model-agnostic Origin Attribution of Generated Images

May 29, 2023
Zhenting Wang, Chen Chen, Yi Zeng, Lingjuan Lyu, Shiqing Ma

NOTABLE: Transferable Backdoor Attacks Against Prompt-based NLP Models

May 28, 2023
Kai Mei, Zheng Li, Zhenting Wang, Yang Zhang, Shiqing Ma

UNICORN: A Unified Backdoor Trigger Inversion Framework

Apr 05, 2023
Zhenting Wang, Kai Mei, Juan Zhai, Shiqing Ma

Backdoor Vulnerabilities in Normally Trained Deep Learning Models

Nov 29, 2022
Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang

Rethinking the Reverse-engineering of Trojan Triggers

Oct 27, 2022
Zhenting Wang, Kai Mei, Hailun Ding, Juan Zhai, Shiqing Ma

BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning

May 26, 2022
Zhenting Wang, Juan Zhai, Shiqing Ma

Neural Network Trojans Analysis and Mitigation from the Input Domain

Feb 16, 2022
Zhenting Wang, Hailun Ding, Juan Zhai, Shiqing Ma
