Weizhu Chen

Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models

Feb 01, 2023
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen

GENIE: Large Scale Pre-training for Text Generation with Diffusion Model

Dec 22, 2022
Zhenghao Lin, Yeyun Gong, Yelong Shen, Tong Wu, Zhihao Fan, Chen Lin, Weizhu Chen, Nan Duan

Generation-Augmented Query Expansion For Code Retrieval

Dec 20, 2022
Dong Li, Yelong Shen, Ruoming Jin, Yi Mao, Kuan Wang, Weizhu Chen

HyperTuning: Toward Adapting Large Language Models without Back-propagation

Nov 22, 2022
Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen

GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation

Nov 18, 2022
Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan Duan, Weizhu Chen

Soft-Labeled Contrastive Pre-training for Function-level Code Representation

Oct 18, 2022
Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan

Less is More: Task-aware Layer-wise Distillation for Language Model Compression

Oct 05, 2022
Chen Liang, Simiao Zuo, Qingru Zhang, Pengcheng He, Weizhu Chen, Tuo Zhao

CodeT: Code Generation with Generated Tests

Jul 21, 2022
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, Weizhu Chen

OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering

Jul 08, 2022
Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, Weizhu Chen

Joint Generator-Ranker Learning for Natural Language Generation

Jun 28, 2022
Weizhou Shen, Yeyun Gong, Yelong Shen, Song Wang, Xiaojun Quan, Nan Duan, Weizhu Chen
