
Suyog Gupta

Discovering Multi-Hardware Mobile Models via Architecture Search

Aug 18, 2020

MobileDets: Searching for Object Detection Architectures for Mobile Accelerators

Apr 30, 2020

Accelerator-aware Neural Network Design using AutoML

Mar 05, 2020

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

Feb 21, 2019

To prune, or not to prune: exploring the efficacy of pruning for model compression

Nov 13, 2017

Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study

Dec 05, 2016

Staleness-aware Async-SGD for Distributed Deep Learning

Apr 05, 2016

Deep Learning with Limited Numerical Precision

Feb 09, 2015

Learning Machines Implemented on Non-Deterministic Hardware

Sep 09, 2014