Omar Mohamed Awad

SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection

Jan 27, 2024
Foozhan Ataiefard, Walid Ahmed, Habib Hajimolahoseini, Saina Asani, Farnoosh Javadi, Mohammad Hassanpour, Omar Mohamed Awad, Austin Wen, Kangling Liu, Yang Liu

SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling

Nov 25, 2023
Habib Hajimolahoseini, Omar Mohamed Awad, Walid Ahmed, Austin Wen, Saina Asani, Mohammad Hassanpour, Farnoosh Javadi, Mehdi Ahmadi, Foozhan Ataiefard, Kangling Liu, Yang Liu

GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values

Nov 06, 2023
Farnoosh Javadi, Walid Ahmed, Habib Hajimolahoseini, Foozhan Ataiefard, Mohammad Hassanpour, Saina Asani, Austin Wen, Omar Mohamed Awad, Kangling Liu, Yang Liu

Improving Resnet-9 Generalization Trained on Small Datasets

Sep 07, 2023
Omar Mohamed Awad, Habib Hajimolahoseini, Michael Lim, Gurpreet Gosal, Walid Ahmed, Yang Liu, Gordon Deng

FPRaker: A Processing Element For Accelerating Neural Network Training

Oct 15, 2020
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos

TensorDash: Exploiting Sparsity to Accelerate Deep Neural Network Training and Inference

Sep 01, 2020
Mostafa Mahmoud, Isak Edo, Ali Hadi Zadeh, Omar Mohamed Awad, Gennady Pekhimenko, Jorge Albericio, Andreas Moshovos
