Leo Yu Zhang


Client-side Gradient Inversion Against Federated Learning from Poisoning

Sep 14, 2023
Jiaheng Wei, Yanjun Zhang, Leo Yu Zhang, Chao Chen, Shirui Pan, Kok-Leong Ong, Jun Zhang, Yang Xiang


Downstream-agnostic Adversarial Examples

Aug 14, 2023
Ziqi Zhou, Shengshan Hu, Ruizhi Zhao, Qian Wang, Leo Yu Zhang, Junhui Hou, Hai Jin


Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training

Jul 19, 2023
Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin


Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning

Apr 21, 2023
Hangtao Zhang, Zeming Yao, Leo Yu Zhang, Shengshan Hu, Chao Chen, Alan Liew, Zhetao Li


Masked Language Model Based Textual Adversarial Example Detection

Apr 19, 2023
Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang, Shengshan Hu, Leo Yu Zhang


PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples

Dec 01, 2022
Shengshan Hu, Junwei Zhang, Wei Liu, Junhui Hou, Minghui Li, Leo Yu Zhang, Hai Jin, Lichao Sun


BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label

Jul 13, 2022
Shengshan Hu, Ziqi Zhou, Yechao Zhang, Leo Yu Zhang, Yifeng Zheng, Yuanyuan He, Hai Jin


Evaluating Membership Inference Through Adversarial Robustness

May 14, 2022
Zhaoxi Zhang, Leo Yu Zhang, Xufei Zheng, Bilal Hussain Abbasi, Shengshan Hu
