Matthew Mattina

Information contraction in noisy binary neural networks and its implications
Jan 28, 2021
Chuteng Zhou, Quntao Zhuang, Matthew Mattina, Paul N. Whatmough

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers
Oct 25, 2020
Colby Banbury, Chuteng Zhou, Igor Fedorov, Ramon Matas Navarro, Urmish Thakker, Dibakar Gope, Vijay Janapa Reddi, Matthew Mattina, Paul N. Whatmough

Rank and run-time aware compression of NLP Applications
Oct 06, 2020
Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina

Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration
Sep 04, 2020
Zhi-Gang Liu, Paul N. Whatmough, Matthew Mattina

High Throughput Matrix-Matrix Multiplication between Asymmetric Bit-Width Operands
Aug 03, 2020
Dibakar Gope, Jesse Beu, Matthew Mattina

Efficient Residue Number System Based Winograd Convolution
Jul 23, 2020
Zhi-Gang Liu, Matthew Mattina

TinyLSTMs: Efficient Neural Speech Enhancement for Hearing Aids
May 20, 2020
Igor Fedorov, Marko Stamenovic, Carl Jensen, Li-Chia Yang, Ari Mandell, Yiming Gan, Matthew Mattina, Paul N. Whatmough

Systolic Tensor Array: An Efficient Structured-Sparse GEMM Accelerator for Mobile CNN Inference
May 16, 2020
Zhi-Gang Liu, Paul N. Whatmough, Matthew Mattina

Searching for Winograd-aware Quantized Networks
Feb 25, 2020
Javier Fernandez-Marques, Paul N. Whatmough, Andrew Mundy, Matthew Mattina