
Yingyan Lin

SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation

May 08, 2020

TIMELY: Pushing Data Movements and Interfaces in PIM Accelerators Towards Local and in Time Domain

May 03, 2020

A New MRAM-based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision

Mar 02, 2020

DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures

Feb 26, 2020

AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs

Jan 06, 2020

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

Jan 03, 2020

E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings

Dec 06, 2019

Drawing early-bird tickets: Towards more efficient training of deep networks

Sep 26, 2019

Dual Dynamic Inference: Enabling More Efficient, Adaptive and Controllable Deep Inference

Jul 17, 2019

Deep $k$-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions

Jun 24, 2018