Deliang Fan

Non-structured DNN Weight Pruning Considered Harmful

Jul 03, 2019
Yanzhi Wang, Shaokai Ye, Zhezhi He, Xiaolong Ma, Linfeng Zhang, Sheng Lin, Geng Yuan, Sia Huat Tan, Zhengang Li, Deliang Fan, Xuehai Qian, Xue Lin, Kaisheng Ma


Defending Against Adversarial Attacks Using Random Forests

Jun 16, 2019
Yifan Ding, Liqiang Wang, Huan Zhang, Jinfeng Yi, Deliang Fan, Boqing Gong


Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness

May 30, 2019
Adnan Siraj Rakin, Zhezhi He, Li Yang, Yanzhi Wang, Liqiang Wang, Deliang Fan


Processing-In-Memory Acceleration of Convolutional Neural Networks for Energy-Efficiency, and Power-Intermittency Resilience

Apr 16, 2019
Arman Roohi, Shaahin Angizi, Deliang Fan, Ronald F DeMara


Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search

Apr 07, 2019
Adnan Siraj Rakin, Zhezhi He, Deliang Fan


Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack

Nov 22, 2018
Adnan Siraj Rakin, Zhezhi He, Deliang Fan


Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation

Oct 02, 2018
Zhezhi He, Deliang Fan


Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy

Jul 20, 2018
Zhezhi He, Boqing Gong, Deliang Fan


Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions

Jul 18, 2018
Adnan Siraj Rakin, Jinfeng Yi, Boqing Gong, Deliang Fan


A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels

Mar 21, 2018
Yifan Ding, Liqiang Wang, Deliang Fan, Boqing Gong
