
Masoud Asgharian

Department of Mathematics and Statistics, McGill University

OAC: Output-adaptive Calibration for Accurate Post-training Quantization

May 23, 2024

AdpQ: A Zero-shot Calibration Free Adaptive Post Training Quantization Method for LLMs

May 22, 2024

Mitigating Outlier Activations in Low-Precision Fine-Tuning of Language Models

Dec 15, 2023

Statistical Hardware Design With Multi-model Active Learning

Mar 26, 2023

Mathematical Challenges in Deep Learning

Mar 24, 2023

On the Convergence of Stochastic Gradient Descent in Low-precision Number Formats

Jan 09, 2023

EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models

Dec 22, 2022

Training Integer-Only Deep Recurrent Neural Networks

Dec 22, 2022

Integer Fine-tuning of Transformer-based Models

Sep 20, 2022

Is Integer Arithmetic Enough for Deep Learning Training?

Jul 18, 2022