
Alexander Kolesnikov

PaliGemma: A versatile 3B VLM for transfer

Jul 10, 2024

Toward a Diffusion-Based Generalist for Dense Vision Tasks

Jun 29, 2024

PaLI-3 Vision Language Models: Smaller, Faster, Stronger

Oct 17, 2023

PaLI-X: On Scaling up a Multilingual Vision and Language Model

May 29, 2023

Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design

May 22, 2023

Capturing dynamical correlations using implicit neural representations

Apr 08, 2023

Sigmoid Loss for Language Image Pre-Training

Mar 30, 2023
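
The pairwise sigmoid loss named in this paper's title can be written down compactly. Below is a minimal sketch in JAX, assuming L2-normalized image and text embeddings from a batch of matching pairs and learnable scalar temperature and bias; the function name, argument names, and shapes are illustrative assumptions, not the paper's reference implementation.

    # Minimal sketch of a pairwise sigmoid contrastive loss (cf. SigLIP).
    # Assumes img_emb, txt_emb are [n, d] L2-normalized embeddings from n
    # matching image-text pairs; t (temperature) and b (bias) are learnable
    # scalars. Names and shapes are illustrative assumptions.
    import jax
    import jax.numpy as jnp

    def sigmoid_contrastive_loss(img_emb, txt_emb, t, b):
        logits = t * img_emb @ txt_emb.T + b            # [n, n] pairwise logits
        labels = 2.0 * jnp.eye(img_emb.shape[0]) - 1.0  # +1 for matching pairs, -1 otherwise
        # Each image-text pair is treated as an independent binary
        # classification problem, so no batch-wide softmax normalization is needed.
        return -jnp.mean(jnp.sum(jax.nn.log_sigmoid(labels * logits), axis=-1))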

A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision

Mar 30, 2023

Tuning computer vision models with task rewards

Feb 16, 2023

Scaling Vision Transformers to 22 Billion Parameters

Feb 10, 2023