
Matthew Mattina

Pushing the limits of RNN Compression
Oct 09, 2019

Compressing RNNs for IoT devices by 15-38x using Kronecker Products
Jun 18, 2019
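
The title above refers to replacing large RNN weight matrices with Kronecker products of small factors. A minimal NumPy sketch of that general idea, with illustrative shapes and no claim to match the paper's exact scheme: storage falls from m*p*n*q values to m*n + p*q, and the matrix-vector product never needs to materialize the full matrix.

```python
import numpy as np

# W (mp x nq) is parameterized as kron(A, B), with A (m x n) and
# B (p x q): m*n + p*q stored values instead of m*p*n*q.
m, n, p, q = 16, 16, 16, 16            # illustrative shapes: W is 256 x 256
A = np.random.randn(m, n)
B = np.random.randn(p, q)
W = np.kron(A, B)                      # dense equivalent, for checking only

x = np.random.randn(n * q)
y_dense = W @ x                        # naive matvec against the full matrix

# Kronecker identity: kron(A, B) @ vec(X) == vec(B @ X @ A.T),
# with column-major vec, so the matvec costs two small matmuls.
X = x.reshape((q, n), order="F")
y_kron = (B @ X @ A.T).reshape(-1, order="F")

assert np.allclose(y_dense, y_kron)
print("dense params:", W.size, "factored params:", A.size + B.size)
```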

Run-Time Efficient RNN Compression for Inference on Edge Devices
Jun 18, 2019

SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers
May 28, 2019

Ternary Hybrid Neural-Tree Networks for Highly Constrained IoT Applications
Mar 04, 2019
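
The "ternary" in the title above means weights constrained to {-1, 0, +1}. A minimal sketch of threshold-based ternarization in the style of Ternary Weight Networks; the threshold and scale recipe here is a common convention, not necessarily the scheme the hybrid neural-tree networks use.

```python
import numpy as np

def ternarize(w, delta_frac=0.7):
    """Map full-precision weights to alpha * {-1, 0, +1}.

    delta = delta_frac * mean(|w|) and the L2-optimal per-tensor
    scale alpha follow the Ternary Weight Networks recipe (an
    assumption here, not taken from the paper above).
    """
    delta = delta_frac * np.mean(np.abs(w))
    t = np.zeros_like(w)
    t[w > delta] = 1.0
    t[w < -delta] = -1.0
    mask = t != 0
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return t, alpha

w = np.random.randn(4, 4)
t, alpha = ternarize(w)
print(t)                     # entries in {-1, 0, +1}: 2 bits per weight
print("reconstruction:\n", alpha * t)
```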

Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs
Mar 04, 2019
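
Winograd (Cook-Toom) convolution trades multiplications for additions, which is what makes it attractive on mobile CPUs. A sketch of the textbook F(2,3) 1-D case, computing 2 outputs of a 3-tap filter with 4 multiplies instead of 6; the transforms are the standard ones, not the paper's tuned kernels.

```python
import numpy as np

# Classical F(2,3) transforms: y = A_T @ ((G @ g) * (B_T @ d)).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

d = np.random.randn(4)   # input tile: 4 samples
g = np.random.randn(3)   # 3-tap filter

# The elementwise product is where multiplies are counted:
# 4 of them, versus 6 for computing the two outputs directly.
y_winograd = A_T @ ((G @ g) * (B_T @ d))

# Direct computation (sliding dot products) as a reference.
y_direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                     d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])

assert np.allclose(y_winograd, y_direct)
```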

Learning low-precision neural networks without Straight-Through Estimator(STE)
Mar 04, 2019
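
For context on the title above: the conventional straight-through estimator (STE) quantizes weights on the forward pass but backpropagates as if the quantizer were the identity, since rounding has zero gradient almost everywhere. A minimal sketch of that baseline, i.e. the approach the paper's title says it avoids; the quantizer step and the toy regression task are illustrative choices.

```python
import numpy as np

def quantize(w, step=0.25):
    # Round to the nearest multiple of `step` (illustrative quantizer).
    return step * np.round(w / step)

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # full-precision "shadow" weights
x = rng.normal(size=8)
target = 1.0

for _ in range(200):
    wq = quantize(w)                 # forward pass uses quantized weights
    y = wq @ x
    dL_dy = 2.0 * (y - target)       # gradient of (y - target)**2
    # STE backward: treat d(wq)/d(w) as 1, so dL/dw = dL_dy * x,
    # ignoring the true (zero) gradient of round().
    w -= 0.01 * dL_dy * x

print("quantized-model loss:", (quantize(w) @ x - target) ** 2)
```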

FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning
Feb 27, 2019

Efficient and Robust Machine Learning for Real-World Systems
Dec 05, 2018

Energy Efficient Hardware for On-Device CNN Inference via Transfer Learning
Dec 04, 2018