Yannan Liu

An Empirical Study on the Efficacy of Deep Active Learning for Image Classification

Nov 30, 2022
Yu Li, Muxi Chen, Yannan Liu, Daojing He, Qiang Xu

TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks

May 21, 2021
Yu Li, Min Li, Qiuxia Lai, Yannan Liu, Qiang Xu

I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators

Mar 05, 2018
Lingxiao Wei, Yannan Liu, Bo Luo, Yu Li, Qiang Xu

Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Jan 15, 2018
Bo Luo, Yannan Liu, Lingxiao Wei, Qiang Xu
