Zhilin Yang

FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
Sep 27, 2021

FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning
Aug 13, 2021

Distribution Matching for Rationalization
Jun 01, 2021

VeniBot: Towards Autonomous Venipuncture with Automatic Puncture Area and Angle Regression from NIR Images
May 27, 2021

FastMoE: A Fast Mixture-of-Expert Training System
Mar 24, 2021

Controllable Generation from Pre-trained Language Models via Inverse Prompting
Mar 19, 2021

GPT Understands, Too
Mar 18, 2021

All NLP Tasks Are Generation Tasks: A General Pretraining Framework
Mar 18, 2021

XLNet: Generalized Autoregressive Pretraining for Language Understanding
Jun 19, 2019

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Jan 18, 2019