Jianfeng Gao

Learning from Self-Sampled Correct and Partially-Correct Programs

May 28, 2022

AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models

May 24, 2022

Visually-Augmented Language Modeling

May 20, 2022

Training Vision-Language Transformers from Captions Alone

May 19, 2022

Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems

May 02, 2022

ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models

Apr 20, 2022

K-LITE: Learning Transferable Visual Models with External Knowledge

Apr 20, 2022

METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals

Apr 16, 2022

Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners

Apr 16, 2022

Unified Contrastive Learning in Image-Text-Label Space

Apr 07, 2022