Aishan Liu

Face Encryption via Frequency-Restricted Identity-Agnostic Attacks

Aug 25, 2023
Xin Dong, Rui Wang, Siyuan Liang, Aishan Liu, Lihua Jing

RobustMQ: Benchmarking Robustness of Quantized Models

Aug 04, 2023
Yisong Xiao, Aishan Liu, Tianyuan Zhang, Haotong Qin, Jinyang Guo, Xianglong Liu

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

Aug 03, 2023
Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency

Jul 01, 2023
Yan Wang, Yuhang Li, Ruihao Gong, Aishan Liu, Yanfei Wang, Jian Hu, Yongqiang Yao, Yunchen Zhang, Tianzi Xiao, Fengwei Yu, Xianglong Liu

FAIRER: Fairness as Decision Rationale Alignment

Jun 27, 2023
Tianlin Li, Qing Guo, Aishan Liu, Mengnan Du, Zhiming Li, Yang Liu

Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks

May 22, 2023
Simin Li, Shuing Zhang, Gujun Chen, Dong Wang, Pu Feng, Jiakai Wang, Aishan Liu, Xin Yi, Xianglong Liu

Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing

May 19, 2023
Yisong Xiao, Aishan Liu, Tianlin Li, Xianglong Liu
