
Paul N. Whatmough

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator (Nov 10, 2021)

Federated Learning Based on Dynamic Regularization (Nov 09, 2021)

S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration (Jul 16, 2021)

A LiDAR-Guided Framework for Video Enhancement (Mar 15, 2021)

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices (Feb 14, 2021)

Information contraction in noisy binary neural networks and its implications (Feb 01, 2021)

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers (Oct 25, 2020)

Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration (Sep 04, 2020)

TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids (May 20, 2020)

Systolic Tensor Array: An Efficient Structured-Sparse GEMM Accelerator for Mobile CNN Inference (May 16, 2020)