Mehdi Rezagholizadeh

Towards Practical Tool Usage for Continually Learning LLMs

Apr 14, 2024
Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh, Sarath Chandar

An Efficient End-to-End Approach to Noise Invariant Speech Features via Multi-Task Learning

Mar 13, 2024
Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, Tiago H. Falk

Resonance RoPE: Improving Context Length Generalization of Large Language Models

Feb 29, 2024
Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu

QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning

Feb 16, 2024
Hossein Rajabzadeh, Mojtaba Valipour, Tianshu Zhu, Marzieh Tahaei, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh

Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models

Feb 03, 2024
Xindi Wang, Mahsa Salmani, Parsa Omidi, Xiangyu Ren, Mehdi Rezagholizadeh, Armaghan Eshaghi

On the importance of Data Scale in Pretraining Arabic Language Models

Jan 15, 2024
Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh, Boxing Chen

NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation

Dec 18, 2023
Nandan Thakur, Luiz Bonifacio, Xinyu Zhang, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Boxing Chen, Mehdi Rezagholizadeh, Jimmy Lin

On the Impact of Quantization and Pruning of Self-Supervised Speech Models for Downstream Speech Recognition Tasks "In-the-Wild"

Sep 25, 2023
Arthur Pimentel, Heitor Guimarães, Anderson R. Avila, Mehdi Rezagholizadeh, Tiago H. Falk

Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)

Sep 16, 2023
Parsa Kavehzadeh, Mojtaba Valipour, Marzieh Tahaei, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh

SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks

Sep 01, 2023
Mojtaba Valipour, Mehdi Rezagholizadeh, Hossein Rajabzadeh, Marzieh Tahaei, Boxing Chen, Ali Ghodsi
