
Hadi Pouransari

Promoting cross-modal representations to improve multimodal foundation models for physiological signals

Oct 21, 2024

Generalizable autoregressive modeling of time series through functional narratives

Oct 10, 2024

MUSCLE: A Model Update Strategy for Compatible LLM Evolution

Jul 12, 2024

DataComp-LM: In search of the next generation of training sets for language models

Jun 18, 2024

Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum

May 21, 2024

CLIP with Quality Captions: A Strong Pretraining for Vision Tasks

May 14, 2024

Label-efficient Training of Small Task-specific Models by Leveraging Vision Foundation Models

Nov 30, 2023

MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

Nov 28, 2023

TiC-CLIP: Continual Training of CLIP Models

Oct 24, 2023

SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding

Oct 23, 2023