Ivan Lazarevich

QGen: On the Ability to Generalize in Quantization Aware Training
Apr 19, 2024

Accelerating Deep Neural Networks via Semi-Structured Activation Sparsity
Sep 27, 2023

YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems
Jul 26, 2023

QReg: On Regularization Effects of Quantization
Jun 27, 2022

Post-training deep neural network pruning via layer-wise calibration
Apr 30, 2021

Neural Network Compression Framework for fast model inference
Mar 12, 2020