Sebastian Goodman

PaLI-3 Vision Language Models: Smaller, Faster, Stronger
Oct 17, 2023

CausalLM is not optimal for in-context learning
Sep 03, 2023

PaLI-X: On Scaling up a Multilingual Vision and Language Model
May 29, 2023

PaLI: A Jointly-Scaled Multilingual Language-Image Model
Sep 16, 2022

PreSTU: Pre-Training for Scene-Text Understanding
Sep 12, 2022

Scaling Up Models and Data with $\texttt{t5x}$ and $\texttt{seqio}$
Mar 31, 2022

Bridging the Gap Between Practice and PAC-Bayes Theory in Few-Shot Meta-Learning
May 28, 2021

TeaForN: Teacher-Forcing with N-grams
Oct 09, 2020

Multi-Image Summarization: Textual Summary from a Set of Cohesive Images
Jun 15, 2020

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Oct 30, 2019