Jinmian Ye

Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks

Sep 18, 2019
Zhonghui You, Kun Yan, Jinmian Ye, Meng Ma, Ping Wang

Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition

Nov 19, 2018
Yu Pan, Jing Xu, Maolin Wang, Jinmian Ye, Fei Wang, Kun Bai, Zenglin Xu

Adversarial Noise Layer: Regularize Neural Network By Adding Noise

Oct 30, 2018
Zhonghui You, Jinmian Ye, Kunming Li, Zenglin Xu, Ping Wang

Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition

May 11, 2018
Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, Zenglin Xu

SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

Jan 13, 2018
Linnan Wang, Jinmian Ye, Yiyang Zhao, Wei Wu, Ang Li, Shuaiwen Leon Song, Zenglin Xu, Tim Kraska

BT-Nets: Simplifying Deep Neural Networks via Block Term Decomposition

Dec 15, 2017
Guangxi Li, Jinmian Ye, Haiqin Yang, Di Chen, Shuicheng Yan, Zenglin Xu

Simple and Efficient Parallelization for Probabilistic Temporal Tensor Factorization

Nov 11, 2016
Guangxi Li, Zenglin Xu, Linnan Wang, Jinmian Ye, Irwin King, Michael Lyu
