
Chaojian Li


FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training

Dec 24, 2020

DNA: Differentiable Network-Accelerator Co-Search

Oct 28, 2020

ShiftAddNet: A Hardware-Inspired Deep Network

Oct 24, 2020

SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation

May 08, 2020
Figure 1 for SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
Figure 2 for SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
Figure 3 for SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
Figure 4 for SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation
Viaarxiv icon

A New MRAM-based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision

Mar 02, 2020

DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures

Feb 26, 2020

AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs

Jan 06, 2020

Drawing early-bird tickets: Towards more efficient training of deep networks

Sep 26, 2019