Matthew Mattina
Run-Time Efficient RNN Compression for Inference on Edge Devices

Jun 12, 2019
Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina

SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers

May 28, 2019
Igor Fedorov, Ryan P. Adams, Matthew Mattina, Paul N. Whatmough

Ternary Hybrid Neural-Tree Networks for Highly Constrained IoT Applications

Mar 04, 2019
Dibakar Gope, Ganesh Dasika, Matthew Mattina

Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs

Mar 04, 2019
Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse Beu, Matthew Mattina, Robert Mullins

Learning low-precision neural networks without Straight-Through Estimator (STE)

Mar 04, 2019
Zhi-Gang Liu, Matthew Mattina

FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning

Feb 27, 2019
Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Shreyas Kolala Venkataramanaiah, Jae-sun Seo, Matthew Mattina

Efficient and Robust Machine Learning for Real-World Systems

Dec 05, 2018
Franz Pernkopf, Wolfgang Roth, Matthias Zoehrer, Lukas Pfeifenberger, Guenther Schindler, Holger Froening, Sebastian Tschiatschek, Robert Peharz, Matthew Mattina, Zoubin Ghahramani

Energy Efficient Hardware for On-Device CNN Inference via Transfer Learning

Dec 04, 2018
Paul Whatmough, Chuteng Zhou, Patrick Hansen, Matthew Mattina

Euphrates: Algorithm-SoC Co-Design for Low-Power Mobile Continuous Vision

Mar 29, 2018
Yuhao Zhu, Anand Samajdar, Matthew Mattina, Paul Whatmough

Mobile Machine Learning Hardware at ARM: A Systems-on-Chip (SoC) Perspective

Feb 01, 2018
Yuhao Zhu, Matthew Mattina, Paul Whatmough
