Fuxun Yu

Interpreting Adversarial Robustness: A View from Decision Surface in Input Space
Oct 12, 2018
Fuxun Yu, Chenchen Liu, Yanzhi Wang, Liang Zhao, Xiang Chen

Interpretable Convolutional Filter Pruning
Oct 12, 2018
Zhuwei Qin, Fuxun Yu, Chenchen Liu, Liang Zhao, Xiang Chen

HASP: A High-Performance Adaptive Mobile Security Enhancement Against Malicious Speech Recognition
Sep 04, 2018
Zirui Xu, Fuxun Yu, Chenchen Liu, Xiang Chen

ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction
Jun 12, 2018
Fuxun Yu, Qide Dong, Xiang Chen

Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients
Jun 07, 2018
Fuxun Yu, Zirui Xu, Yanzhi Wang, Chenchen Liu, Xiang Chen

How convolutional neural network see the world - A survey of convolutional neural network visualization methods
May 31, 2018
Zhuwei Qin, Fuxun Yu, Chenchen Liu, Xiang Chen