Neil Zhenqiang Gong

Linear-Time Self Attention with Codeword Histogram for Efficient Recommendation

May 28, 2021
Yongji Wu, Defu Lian, Neil Zhenqiang Gong, Lu Yin, Mingyang Yin, Jingren Zhou, Hongxia Yang

Rethinking Lifelong Sequential Recommendation with Incremental Multi-Interest Attention

May 28, 2021
Yongji Wu, Lu Yin, Defu Lian, Mingyang Yin, Neil Zhenqiang Gong, Jingren Zhou, Hongxia Yang

PointGuard: Provably Robust 3D Point Cloud Classification

Mar 04, 2021
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong

Data Poisoning Attacks and Defenses to Crowdsourcing Systems

Feb 24, 2021
Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jin Tian, Jia Liu

Provably Secure Federated Learning against Malicious Clients

Feb 16, 2021
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong

Data Poisoning Attacks to Deep Learning Based Recommender Systems

Jan 08, 2021
Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu

Practical Blind Membership Inference Attack via Differential Comparisons

Jan 07, 2021
Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
