Xianglong Liu

RobustMQ: Benchmarking Robustness of Quantized Models

Aug 04, 2023
Yisong Xiao, Aishan Liu, Tianyuan Zhang, Haotong Qin, Jinyang Guo, Xianglong Liu

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

Aug 03, 2023
Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency

Jul 01, 2023
Yan Wang, Yuhang Li, Ruihao Gong, Aishan Liu, Yanfei Wang, Jian Hu, Yongqiang Yao, Yunchen Zhang, Tianzi Xiao, Fengwei Yu, Xianglong Liu

Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks

May 22, 2023
Simin Li, Shuing Zhang, Gujun Chen, Dong Wang, Pu Feng, Jiakai Wang, Aishan Liu, Xin Yi, Xianglong Liu

Latent Imitator: Generating Natural Individual Discriminatory Instances for Black-Box Fairness Testing

May 19, 2023
Yisong Xiao, Aishan Liu, Tianlin Li, Xianglong Liu

Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling

Apr 18, 2023
Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong, Jinyang Guo, Xianglong Liu

Towards Accurate Post-Training Quantization for Vision Transformer

Mar 25, 2023
Yifu Ding, Haotong Qin, Qinghua Yan, Zhenhua Chai, Junjie Liu, Xiaolin Wei, Xianglong Liu

X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item Detection

Feb 19, 2023
Aishan Liu, Jun Guo, Jiakai Wang, Siyuan Liang, Renshuai Tao, Wenbo Zhou, Cong Liu, Xianglong Liu, Dacheng Tao

Attacking Cooperative Multi-Agent Reinforcement Learning by Adversarial Minority Influence

Feb 07, 2023
Simin Li, Jun Guo, Jingqiao Xiu, Pu Feng, Xin Yu, Jiakai Wang, Aishan Liu, Wenjun Wu, Xianglong Liu
