Yingyan Lin

E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings

Dec 06, 2019
Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang

E2-Train: Training State-of-the-art CNNs with Over 80% Less Energy

Nov 06, 2019
Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang

E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving

Oct 29, 2019
Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang

Drawing early-bird tickets: Towards more efficient training of deep networks

Sep 26, 2019
Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Yingyan Lin, Zhangyang Wang, Richard G. Baraniuk

Dual Dynamic Inference: Enabling More Efficient, Adaptive and Controllable Deep Inference

Jul 17, 2019
Yue Wang, Jianghao Shen, Ting-Kuei Hu, Pengfei Xu, Tan Nguyen, Richard Baraniuk, Zhangyang Wang, Yingyan Lin

Deep $k$-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions

Jun 24, 2018
Junru Wu, Yue Wang, Zhenyu Wu, Zhangyang Wang, Ashok Veeraraghavan, Yingyan Lin
