Guangdong Bai

Text Meets Topology: Rethinking Out-of-distribution Detection in Text-Rich Networks
Aug 25, 2025

Detecting Manipulated Contents Using Knowledge-Grounded Inference
Apr 29, 2025

MAA: Meticulous Adversarial Attack against Vision-Language Pre-trained Models
Feb 12, 2025

GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation
Feb 09, 2025

Model-Enhanced LLM-Driven VUI Testing of VPA Apps
Jul 03, 2024

Effective and Robust Adversarial Training against Data and Label Corruptions
May 07, 2024

PAODING: A High-fidelity Data-free Pruning Toolkit for Debloating Pre-trained Neural Networks
Apr 30, 2024

Beyond Fidelity: Explaining Vulnerability Localization of Learning-based Detectors
Jan 05, 2024

AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification
Nov 23, 2023

Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective
Jun 24, 2022