
Basil Mustafa

Three Towers: Flexible Contrastive Learning with Pretrained Image Models

May 29, 2023

PaLI-X: On Scaling up a Multilingual Vision and Language Model

May 29, 2023

Sigmoid Loss for Language Image Pre-Training

Mar 30, 2023

Scaling Vision Transformers to 22 Billion Parameters

Feb 10, 2023

Massively Scaling Heteroscedastic Classifiers

Jan 30, 2023

Image-and-Language Understanding from Pixels Only

Dec 15, 2022

Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints

Dec 09, 2022

PaLI: A Jointly-Scaled Multilingual Language-Image Model

Sep 16, 2022

Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts

Jun 06, 2022

Robust and Efficient Medical Imaging with Self-Supervision

May 19, 2022