
Binghui Wang

IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients

Mar 24, 2023

UniCR: Universally Approximated Certified Robustness via Randomized Smoothing

Jul 10, 2022

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks

Jun 11, 2022

Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees

May 07, 2022

Detecting Gender Bias in Transformer-based Models: A Case Study on BERT

Oct 15, 2021

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Aug 21, 2021

Privacy-Preserving Representation Learning on Graphs: A Mutual Information Perspective

Jul 03, 2021

Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting

Apr 22, 2021

Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks

Dec 25, 2020

Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective

Dec 08, 2020