Ali Ghodsi

Learning Chemotherapy Drug Action via Universal Physics-Informed Neural Networks

Apr 11, 2024
Lena Podina, Ali Ghodsi, Mohammad Kohandel

Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling

Feb 28, 2024
Mahdi Karami, Ali Ghodsi

QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning

Feb 16, 2024
Hossein Rajabzadeh, Mojtaba Valipour, Tianshu Zhu, Marzieh Tahaei, Hyock Ju Kwon, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh

Scalable Graph Self-Supervised Learning

Feb 14, 2024
Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Raika Karimi, Ali Ghodsi

WERank: Towards Rank Degradation Prevention for Self-Supervised Learning Using Weight Regularization

Feb 14, 2024
Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Ali Ghodsi

Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)

Sep 16, 2023
Parsa Kavehzadeh, Mojtaba Valipour, Marzieh Tahaei, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh

SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks

Sep 01, 2023
Mojtaba Valipour, Mehdi Rezagholizadeh, Hossein Rajabzadeh, Marzieh Tahaei, Boxing Chen, Ali Ghodsi

Recurrent Neural Networks and Long Short-Term Memory Networks: Tutorial and Survey

Apr 22, 2023
Benyamin Ghojogh, Ali Ghodsi

Improved knowledge distillation by utilizing backward pass knowledge in neural networks

Jan 27, 2023
Aref Jafari, Mehdi Rezagholizadeh, Ali Ghodsi

Improving Generalization of Pre-trained Language Models via Stochastic Weight Averaging

Dec 16, 2022
Peng Lu, Ivan Kobyzev, Mehdi Rezagholizadeh, Ahmad Rashid, Ali Ghodsi, Philippe Langlais