Neil Zhenqiang Gong

PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees

Mar 03, 2023
Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service

Jan 07, 2023
Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong

AFLGuard: Byzantine-robust Asynchronous Federated Learning

Dec 13, 2022
Minghong Fang, Jia Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley

Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning

Dec 06, 2022
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning

Nov 22, 2022
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

Addressing Heterogeneity in Federated Learning via Distributional Transformation

Oct 26, 2022
Haolin Yuan, Bo Hui, Yuchen Yang, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao

FLCert: Provably Secure Federated Learning against Poisoning Attacks

Oct 04, 2022
Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong

MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples

Oct 03, 2022
Jinyuan Jia, Wenjie Qu, Neil Zhenqiang Gong

Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning

Jul 25, 2022
Xinlei He, Hongbin Liu, Neil Zhenqiang Gong, Yang Zhang
