
Mohammad Rastegari

eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models

Sep 13, 2023
Minsik Cho, Keivan A. Vahid, Qichen Fu, Saurabh Adya, Carlo C Del Mundo, Mohammad Rastegari, Devang Naik, Peter Zatloukal

On the Efficacy of Multi-scale Data Samplers for Vision Applications

Sep 08, 2023
Elvis Nunez, Thomas Merth, Anish Prabhu, Mehrdad Farajtabar, Mohammad Rastegari, Sachin Mehta, Maxwell Horton

Bytes Are All You Need: Transformers Operating Directly On File Bytes

May 31, 2023
Maxwell Horton, Sachin Mehta, Ali Farhadi, Mohammad Rastegari

Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement

Mar 15, 2023
Fartash Faghri, Hadi Pouransari, Sachin Mehta, Mehrdad Farajtabar, Ali Farhadi, Mohammad Rastegari, Oncel Tuzel

RangeAugment: Efficient Online Augmentation with Range Learning

Dec 20, 2022
Sachin Mehta, Saeid Naderiparizi, Fartash Faghri, Maxwell Horton, Lailin Chen, Ali Farhadi, Oncel Tuzel, Mohammad Rastegari

SPIN: An Empirical Evaluation on Sharing Parameters of Isotropic Networks

Jul 21, 2022
Chien-Yu Lin, Anish Prabhu, Thomas Merth, Sachin Mehta, Anurag Ranjan, Maxwell Horton, Mohammad Rastegari

Separable Self-attention for Mobile Vision Transformers

Jun 06, 2022
Sachin Mehta, Mohammad Rastegari

CVNets: High Performance Library for Computer Vision

Jun 04, 2022
Sachin Mehta, Farzad Abdolhosseini, Mohammad Rastegari

Token Pooling in Vision Transformers

Oct 11, 2021
Dmitrii Marin, Jen-Hao Rick Chang, Anurag Ranjan, Anish Prabhu, Mohammad Rastegari, Oncel Tuzel
