Binghui Wang

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Aug 21, 2021
Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu

Privacy-Preserving Representation Learning on Graphs: A Mutual Information Perspective

Jul 03, 2021
Binghui Wang, Jiayi Guo, Ang Li, Yiran Chen, Hai Li

Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting

Apr 22, 2021
Qiming Wu, Zhikang Zou, Pan Zhou, Xiaoqing Ye, Binghui Wang, Ang Li

Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks

Dec 25, 2020
Binghui Wang, Jinyuan Jia, Neil Zhenqiang Gong

Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective

Dec 08, 2020
Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen

GraphFL: A Federated Learning Framework for Semi-Supervised Node Classification on Graphs

Dec 08, 2020
Binghui Wang, Ang Li, Hai Li, Yiran Chen

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations

Nov 15, 2020
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes

Oct 26, 2020
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong

Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs

Sep 12, 2020
Houxiang Fan, Binghui Wang, Pan Zhou, Ang Li, Meng Pang, Zichuan Xu, Cai Fu, Hai Li, Yiran Chen
