Yeyun Gong

AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators

Mar 29, 2023
Xingwei He, Zhenghao Lin, Yeyun Gong, A-Long Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen


Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models

Feb 01, 2023
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, Weizhu Chen


GENIE: Large Scale Pre-training for Text Generation with Diffusion Model

Dec 22, 2022
Zhenghao Lin, Yeyun Gong, Yelong Shen, Tong Wu, Zhihao Fan, Chen Lin, Weizhu Chen, Nan Duan


Curriculum Sampling for Dense Retrieval with Document Expansion

Dec 18, 2022
Xingwei He, Yeyun Gong, A-Long Jin, Hang Zhang, Anlei Dong, Jian Jiao, Siu Ming Yiu, Nan Duan


MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders are Better Dense Retrievers

Dec 15, 2022
Kun Zhou, Xiao Liu, Yeyun Gong, Wayne Xin Zhao, Daxin Jiang, Nan Duan, Ji-Rong Wen


APOLLO: An Optimized Training Approach for Long-form Numerical Reasoning

Dec 14, 2022
Jiashuo Sun, Hang Zhang, Chen Lin, Yeyun Gong, Jian Guo, Nan Duan


LEAD: Liberal Feature-based Distillation for Dense Retrieval

Dec 10, 2022
Hao Sun, Xiao Liu, Yeyun Gong, Anlei Dong, Jian Jiao, Jingwen Lu, Yan Zhang, Daxin Jiang, Linjun Yang, Rangan Majumder, Nan Duan


GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation

Nov 18, 2022
Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan Duan, Weizhu Chen


Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis

Oct 19, 2022
Shuai Fan, Chen Lin, Haonan Li, Zhenghao Lin, Jinsong Su, Hang Zhang, Yeyun Gong, Jian Guo, Nan Duan


Soft-Labeled Contrastive Pre-training for Function-level Code Representation

Oct 18, 2022
Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan
