Yelong Shen

GENIE: Large Scale Pre-training for Text Generation with Diffusion Model

Dec 22, 2022
Zhenghao Lin, Yeyun Gong, Yelong Shen, Tong Wu, Zhihao Fan, Chen Lin, Weizhu Chen, Nan Duan

Generation-Augmented Query Expansion For Code Retrieval

Dec 20, 2022
Dong Li, Yelong Shen, Ruoming Jin, Yi Mao, Kuan Wang, Weizhu Chen

GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation

Nov 18, 2022
Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan Duan, Weizhu Chen

Soft-Labeled Contrastive Pre-training for Function-level Code Representation

Oct 18, 2022
Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan

Explanations from Large Language Models Make Small Reasoners Better

Oct 13, 2022
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan

Joint Generator-Ranker Learning for Natural Language Generation

Jun 28, 2022
Weizhou Shen, Yeyun Gong, Yelong Shen, Song Wang, Xiaojun Quan, Nan Duan, Weizhu Chen

A Self-Paced Mixed Distillation Method for Non-Autoregressive Generation

May 23, 2022
Weizhen Qi, Yeyun Gong, Yelong Shen, Jian Jiao, Yu Yan, Houqiang Li, Ruofei Zhang, Weizhu Chen, Nan Duan

CAMERO: Consistency Regularized Ensemble of Perturbed Language Models with Weight Sharing

Apr 18, 2022
Chen Liang, Pengcheng He, Yelong Shen, Weizhu Chen, Tuo Zhao

Controllable Natural Language Generation with Contrastive Prefixes

Feb 27, 2022
Jing Qian, Li Dong, Yelong Shen, Furu Wei, Weizhu Chen
