Charbel Sakr

ESPACE: Dimensionality Reduction of Activations for Model Compression

Oct 07, 2024

VaPr: Variable-Precision Tensors to Accelerate Robot Motion Planning

Oct 11, 2023

Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training

Jun 13, 2022

Fundamental Limits on Energy-Delay-Accuracy of In-memory Architectures in Inference Applications

Dec 25, 2020

HarDNN: Feature Map Vulnerability Evaluation in CNNs

Feb 25, 2020

Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks

Jan 19, 2019

Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm

Dec 31, 2018

Understanding the Energy and Precision Requirements for Online Learning

Aug 26, 2016