Yannan Liu

One Model Transfer to All: On Robust Jailbreak Prompts Generation against LLMs

May 23, 2025

An Empirical Study on the Efficacy of Deep Active Learning for Image Classification

Nov 30, 2022

TestRank: Bringing Order into Unlabeled Test Instances for Deep Learning Tasks

May 21, 2021

I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators

Mar 05, 2018

Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Jan 15, 2018