
Jun Nishikawa


n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization

Mar 22, 2021
Yuiko Sakuma, Hiroshi Sumihiro, Jun Nishikawa, Toshiki Nakamura, Ryoji Ikegaya


Filter Pre-Pruning for Improved Fine-tuning of Quantized Deep Neural Networks

Nov 25, 2020
Jun Nishikawa, Ryoji Ikegaya
