Matthew Mattina

Design Principles for Lifelong Learning AI Accelerators

Oct 05, 2023

UDC: Unified DNAS for Compressible TinyML Models

Jan 21, 2022

Federated Learning Based on Dynamic Regularization

Nov 09, 2021

Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification

Aug 13, 2021

S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration

Jul 16, 2021

On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks

Feb 22, 2021

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices

Feb 14, 2021

Information contraction in noisy binary neural networks and its implications

Feb 01, 2021

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers

Oct 25, 2020

Rank and run-time aware compression of NLP Applications

Oct 06, 2020