Binghui Wang

Understanding Data Reconstruction Leakage in Federated Learning from a Theoretical Perspective

Aug 22, 2024

Universally Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence

Jul 24, 2024

Graph Neural Network Causal Explanation via Neural Causal Models

Jul 12, 2024

Graph Neural Network Explanations are Fragile

Jun 05, 2024

Securing GNNs: Explanation-Based Identification of Backdoored Training Graphs

Mar 26, 2024

Inf2Guard: An Information-Theoretic Framework for Learning Privacy-Preserving Representations against Inference Attacks

Mar 04, 2024

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models

Feb 12, 2024

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

Jul 31, 2023

A Certified Radius-Guided Attack Framework to Image Segmentation Models

Apr 05, 2023

IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients

Mar 24, 2023