Zhilin Yang

FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning

Aug 13, 2021
Jing Zhou, Yanan Zheng, Jie Tang, Jian Li, Zhilin Yang

Distribution Matching for Rationalization

Jun 01, 2021
Yongfeng Huang, Yujun Chen, Yulun Du, Zhilin Yang

VeniBot: Towards Autonomous Venipuncture with Automatic Puncture Area and Angle Regression from NIR Images

May 27, 2021
Xu Cao, Zijie Chen, Bolin Lai, Yuxuan Wang, Yu Chen, Zhengqing Cao, Zhilin Yang, Nanyang Ye, Junbo Zhao, Xiao-Yun Zhou, Peng Qi

FastMoE: A Fast Mixture-of-Expert Training System

Mar 24, 2021
Jiaao He, Jiezhong Qiu, Aohan Zeng, Zhilin Yang, Jidong Zhai, Jie Tang

Controllable Generation from Pre-trained Language Models via Inverse Prompting

Mar 19, 2021
Xu Zou, Da Yin, Qingyang Zhong, Hongxia Yang, Zhilin Yang, Jie Tang

GPT Understands, Too

Mar 18, 2021
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang

All NLP Tasks Are Generation Tasks: A General Pretraining Framework

Mar 18, 2021
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang

XLNet: Generalized Autoregressive Pretraining for Language Understanding

Jun 19, 2019
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

Jan 18, 2019
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov

HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering

Sep 25, 2018
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, Christopher D. Manning
