
Fartash Faghri

Weight subcloning: direct initialization of transformers using larger pretrained ones

Dec 14, 2023
Mohammad Samragh, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Fartash Faghri, Devang Naik, Oncel Tuzel, Mohammad Rastegari

Label-efficient Training of Small Task-specific Models by Leveraging Vision Foundation Models

Nov 30, 2023
Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, Oncel Tuzel

MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

Nov 28, 2023
Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel

TiC-CLIP: Continual Training of CLIP Models

Oct 24, 2023
Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri

SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding

Oct 23, 2023
Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari

CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement

Oct 21, 2023
Mohammadreza Salehi, Mehrdad Farajtabar, Maxwell Horton, Fartash Faghri, Hadi Pouransari, Raviteja Vemulapalli, Oncel Tuzel, Ali Farhadi, Mohammad Rastegari, Sachin Mehta

Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement

Mar 15, 2023
Fartash Faghri, Hadi Pouransari, Sachin Mehta, Mehrdad Farajtabar, Ali Farhadi, Mohammad Rastegari, Oncel Tuzel

FastFill: Efficient Compatible Model Update

Mar 08, 2023
Florian Jaeckle, Fartash Faghri, Ali Farhadi, Oncel Tuzel, Hadi Pouransari

RangeAugment: Efficient Online Augmentation with Range Learning

Dec 20, 2022
Sachin Mehta, Saeid Naderiparizi, Fartash Faghri, Maxwell Horton, Lailin Chen, Ali Farhadi, Oncel Tuzel, Mohammad Rastegari

APE: Aligning Pretrained Encoders to Quickly Learn Aligned Multimodal Representations

Oct 08, 2022
Elan Rosenfeld, Preetum Nakkiran, Hadi Pouransari, Oncel Tuzel, Fartash Faghri
