BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization

Feb 20, 2021

View the paper on arXiv or OpenReview.
