Hadi Pouransari

Label-efficient Training of Small Task-specific Models by Leveraging Vision Foundation Models

Nov 30, 2023
Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, Oncel Tuzel

MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

Nov 28, 2023
Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel

TiC-CLIP: Continual Training of CLIP Models

Oct 24, 2023
Saurabh Garg, Mehrdad Farajtabar, Hadi Pouransari, Raviteja Vemulapalli, Sachin Mehta, Oncel Tuzel, Vaishaal Shankar, Fartash Faghri

SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding

Oct 23, 2023
Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, Hadi Pouransari

CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement

Oct 21, 2023
Mohammadreza Salehi, Mehrdad Farajtabar, Maxwell Horton, Fartash Faghri, Hadi Pouransari, Raviteja Vemulapalli, Oncel Tuzel, Ali Farhadi, Mohammad Rastegari, Sachin Mehta

Frequency-Aware Masked Autoencoders for Multimodal Pretraining on Biosignals

Sep 12, 2023
Ran Liu, Ellen L. Zippi, Hadi Pouransari, Chris Sandino, Jingping Nie, Hanlin Goh, Erdrin Azemi, Ali Moin

Reinforce Data, Multiply Impact: Improved Model Accuracy and Robustness with Dataset Reinforcement

Mar 15, 2023
Fartash Faghri, Hadi Pouransari, Sachin Mehta, Mehrdad Farajtabar, Ali Farhadi, Mohammad Rastegari, Oncel Tuzel

FastFill: Efficient Compatible Model Update

Mar 08, 2023
Florian Jaeckle, Fartash Faghri, Ali Farhadi, Oncel Tuzel, Hadi Pouransari

APE: Aligning Pretrained Encoders to Quickly Learn Aligned Multimodal Representations

Oct 08, 2022
Elan Rosenfeld, Preetum Nakkiran, Hadi Pouransari, Oncel Tuzel, Fartash Faghri

Forward Compatible Training for Representation Learning

Dec 06, 2021
Vivek Ramanujan, Pavan Kumar Anasosalu Vasu, Ali Farhadi, Oncel Tuzel, Hadi Pouransari
