
Matthew Mattina


Design Principles for Lifelong Learning AI Accelerators

Oct 05, 2023
Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin Epstein

UDC: Unified DNAS for Compressible TinyML Models

Jan 21, 2022
Igor Fedorov, Ramon Matas, Hokchhay Tann, Chuteng Zhou, Matthew Mattina, Paul Whatmough

Federated Learning Based on Dynamic Regularization

Nov 09, 2021
Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama

Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification

Aug 13, 2021
Shyam A. Tailor, René de Jong, Tiago Azevedo, Matthew Mattina, Partha Maji

S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration

Jul 16, 2021
Zhi-Gang Liu, Paul N. Whatmough, Yuhao Zhu, Matthew Mattina

On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks

Feb 22, 2021
Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices

Feb 14, 2021
Urmish Thakker, Paul N. Whatmough, Zhigang Liu, Matthew Mattina, Jesse Beu

Information contraction in noisy binary neural networks and its implications

Feb 01, 2021
Chuteng Zhou, Quntao Zhuang, Matthew Mattina, Paul N. Whatmough
