Michael Zeng

Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data

Mar 16, 2022

AdaPrompt: Adaptive Model Training for Prompt-based NLP

Feb 10, 2022

Unsupervised Summarization with Customized Granularities

Jan 29, 2022

CLIP-Event: Connecting Text and Images with Event Structures

Jan 13, 2022

Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

Dec 14, 2021

Sequence-level self-learning with multiple hypotheses

Dec 10, 2021

MLP Architectures for Vision-and-Language Modeling: An Empirical Study

Dec 08, 2021

An Empirical Study of Training End-to-End Vision-and-Language Transformers

Nov 25, 2021

Florence: A New Foundation Model for Computer Vision

Nov 22, 2021

WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

Oct 29, 2021