
Binghui Wang


Securing GNNs: Explanation-Based Identification of Backdoored Training Graphs

Mar 26, 2024
Jane Downer, Ren Wang, Binghui Wang


Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks

Mar 04, 2024
Sayedeh Leila Noorbakhsh, Binghui Zhang, Yuan Hong, Binghui Wang


PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models

Feb 12, 2024
Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia


Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

Jul 31, 2023
Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren


A Certified Radius-Guided Attack Framework to Image Segmentation Models

Apr 05, 2023
Wenjie Qu, Youqi Li, Binghui Wang


IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients

Mar 24, 2023
Ruo Yang, Binghui Wang, Mustafa Bilgic


UniCR: Universally Approximated Certified Robustness via Randomized Smoothing

Jul 10, 2022
Hanbin Hong, Binghui Wang, Yuan Hong


NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks

Jun 11, 2022
Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam


Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees

May 07, 2022
Binghui Wang, Youqi Li, Pan Zhou
