
Liang Tong

Hierarchical Graph Neural Networks for Causal Discovery and Root Cause Localization
Feb 03, 2023

Personalized Federated Learning via Heterogeneous Modular Networks
Oct 26, 2022

FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification
Oct 25, 2022

FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems
Apr 08, 2021

Towards Robustness against Unsuspicious Adversarial Examples
May 08, 2020

Defending Against Physically Realizable Attacks on Image Classification
Sep 20, 2019

Finding Needles in a Moving Haystack: Prioritizing Alerts with Adversarial Reinforcement Learning
Jun 20, 2019

A Framework for Validating Models of Evasion Attacks on Machine Learning, with Application to Malware Detection
Jun 13, 2018

Adversarial Regression with Multiple Learners
Jun 06, 2018