Matthew Mattina

Compressing Language Models using Doped Kronecker Products

Jan 31, 2020
Urmish Thakker, Paul N. Whatmough, Matthew Mattina, Jesse Beu

Figures 1–4.

Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation

Jan 14, 2020
Chuteng Zhou, Prad Kadambi, Matthew Mattina, Paul N. Whatmough

Figures 1–4.

ISP4ML: Understanding the Role of Image Signal Processing in Efficient Deep Learning Vision Systems

Nov 25, 2019
Patrick Hansen, Alexey Vilkin, Yury Khrustalev, James Imber, David Hanwell, Matthew Mattina, Paul N. Whatmough

Figures 1–4.

Ternary MobileNets via Per-Layer Hybrid Filter Banks

Nov 04, 2019
Dibakar Gope, Jesse Beu, Urmish Thakker, Matthew Mattina

Figures 1–4.

Pushing the limits of RNN Compression

Oct 09, 2019
Urmish Thakker, Igor Fedorov, Jesse Beu, Dibakar Gope, Chu Zhou, Ganesh Dasika, Matthew Mattina

Figure 1.

Compressing RNNs for IoT devices by 15-38x using Kronecker Products

Jun 18, 2019
Urmish Thakker, Jesse Beu, Dibakar Gope, Chu Zhou, Igor Fedorov, Ganesh Dasika, Matthew Mattina

Figures 1–4.
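The Kronecker-product idea behind the entry above can be sketched briefly. This is an illustrative example only, not the authors' implementation, and the matrix sizes are arbitrary: a dense weight matrix of shape (m·p, n·q) is replaced by two small factors A (m, n) and B (p, q) with W = A ⊗ B, so only the factors' parameters need to be stored.

```python
import numpy as np

# Illustrative sketch of Kronecker-product (KP) weight compression.
# Sizes are arbitrary; they are not taken from the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16))   # small factor A
B = rng.standard_normal((16, 16))   # small factor B

W = np.kron(A, B)                   # dense equivalent, shape (256, 256)

dense_params = W.size               # parameters if W were stored directly
kp_params = A.size + B.size         # parameters actually stored
print(dense_params / kp_params)     # compression ratio for this toy sizing
```

At inference time the product W @ x can also be computed directly from A and B without materializing W, which is where the run-time savings come from.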

Run-Time Efficient RNN Compression for Inference on Edge Devices

Jun 18, 2019
Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina

Figures 1–4.