Mehdi Rezagholizadeh

RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness
Feb 18, 2023
Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk

Improved knowledge distillation by utilizing backward pass knowledge in neural networks
Jan 27, 2023
Aref Jafari, Mehdi Rezagholizadeh, Ali Ghodsi

KronA: Parameter Efficient Tuning with Kronecker Adapter
Dec 20, 2022
Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh

Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging
Dec 16, 2022
Peng Lu, Ivan Kobyzev, Mehdi Rezagholizadeh, Ahmad Rashid, Ali Ghodsi, Philippe Langlais

Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization
Dec 12, 2022
Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, Ali Ghodsi

Improving the Robustness of DistilHuBERT to Unseen Noisy Conditions via Data Augmentation, Curriculum Learning, and Multi-Task Enhancement
Nov 12, 2022
Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk

Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages
Oct 18, 2022
Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, Jimmy Lin

DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation
Oct 14, 2022
Mojtaba Valipour, Mehdi Rezagholizadeh, Ivan Kobyzev, Ali Ghodsi

Integer Fine-tuning of Transformer-based Models
Sep 20, 2022
Mohammadreza Tayaranian, Alireza Ghaffari, Marzieh S. Tahaei, Mehdi Rezagholizadeh, Masoud Asgharian, Vahid Partovi Nia