Habib Hajimolahoseini
SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection

Jan 27, 2024
Foozhan Ataiefard, Walid Ahmed, Habib Hajimolahoseini, Saina Asani, Farnoosh Javadi, Mohammad Hassanpour, Omar Mohamed Awad, Austin Wen, Kangling Liu, Yang Liu

SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling

Nov 25, 2023
Habib Hajimolahoseini, Omar Mohamed Awad, Walid Ahmed, Austin Wen, Saina Asani, Mohammad Hassanpour, Farnoosh Javadi, Mehdi Ahmadi, Foozhan Ataiefard, Kangling Liu, Yang Liu

GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values

Nov 06, 2023
Farnoosh Javadi, Walid Ahmed, Habib Hajimolahoseini, Foozhan Ataiefard, Mohammad Hassanpour, Saina Asani, Austin Wen, Omar Mohamed Awad, Kangling Liu, Yang Liu

Speeding up Resnet Architecture with Layers Targeted Low Rank Decomposition

Sep 21, 2023
Walid Ahmed, Habib Hajimolahoseini, Austin Wen, Yang Liu


Improving Resnet-9 Generalization Trained on Small Datasets

Sep 07, 2023
Omar Mohamed Awad, Habib Hajimolahoseini, Michael Lim, Gurpreet Gosal, Walid Ahmed, Yang Liu, Gordon Deng


Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization

Sep 07, 2023
Habib Hajimolahoseini, Walid Ahmed, Yang Liu


A Short Study on Compressing Decoder-Based Language Models

Oct 16, 2021
Tianda Li, Yassir El Mesbahi, Ivan Kobyzev, Ahmad Rashid, Atif Mahmud, Nithin Anchuri, Habib Hajimolahoseini, Yang Liu, Mehdi Rezagholizadeh
