Jesse Beu

Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices

Feb 14, 2021
Urmish Thakker, Paul N. Whatmough, Zhigang Liu, Matthew Mattina, Jesse Beu

Rank and run-time aware compression of NLP Applications

Oct 06, 2020
Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina

High Throughput Matrix-Matrix Multiplication between Asymmetric Bit-Width Operands

Aug 03, 2020
Dibakar Gope, Jesse Beu, Matthew Mattina

Compressing Language Models using Doped Kronecker Products

Jan 31, 2020
Urmish Thakker, Paul N. Whatmough, Matthew Mattina, Jesse Beu

Ternary MobileNets via Per-Layer Hybrid Filter Banks

Nov 04, 2019
Dibakar Gope, Jesse Beu, Urmish Thakker, Matthew Mattina

Pushing the limits of RNN Compression

Oct 09, 2019
Urmish Thakker, Igor Fedorov, Jesse Beu, Dibakar Gope, Chu Zhou, Ganesh Dasika, Matthew Mattina

Compressing RNNs for IoT devices by 15-38x using Kronecker Products

Jun 18, 2019
Urmish Thakker, Jesse Beu, Dibakar Gope, Chu Zhou, Igor Fedorov, Ganesh Dasika, Matthew Mattina

Run-Time Efficient RNN Compression for Inference on Edge Devices

Jun 18, 2019
Urmish Thakker, Jesse Beu, Dibakar Gope, Ganesh Dasika, Matthew Mattina
