Nan Duan

GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation

Nov 18, 2022
Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan Duan, Weizhu Chen

Execution-based Evaluation for Data Science Code Generation Models

Nov 17, 2022
Junjie Huang, Chenglong Wang, Jipeng Zhang, Cong Yan, Haotian Cui, Jeevana Priya Inala, Colin Clement, Nan Duan, Jianfeng Gao

An Efficient COarse-to-fiNE Alignment Framework @ Ego4D Natural Language Queries Challenge 2022

Nov 16, 2022
Zhijian Hou, Wanjun Zhong, Lei Ji, Difei Gao, Kun Yan, Wing-Kwong Chan, Chong-Wah Ngo, Zheng Shou, Nan Duan

Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers

Oct 20, 2022
Wanjun Zhong, Tingting Ma, Jiahai Wang, Jian Yin, Tiejun Zhao, Chin-Yew Lin, Nan Duan

Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis

Oct 19, 2022
Shuai Fan, Chen Lin, Haonan Li, Zhenghao Lin, Jinsong Su, Hang Zhang, Yeyun Gong, Jian Guo, Nan Duan

Soft-Labeled Contrastive Pre-training for Function-level Code Representation

Oct 18, 2022
Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan

Mixed-modality Representation Learning and Pre-training for Joint Table-and-Text Retrieval in OpenQA

Oct 11, 2022
Junjie Huang, Wanjun Zhong, Qian Liu, Ming Gong, Daxin Jiang, Nan Duan

HORIZON: A High-Resolution Panorama Synthesis Framework

Oct 10, 2022
Kun Yan, Lei Ji, Chenfei Wu, Jian Liang, Ming Zhou, Nan Duan, Shuai Ma

PROD: Progressive Distillation for Dense Retrieval

Sep 27, 2022
Zhenghao Lin, Yeyun Gong, Xiao Liu, Hang Zhang, Chen Lin, Anlei Dong, Jian Jiao, Jingwen Lu, Daxin Jiang, Rangan Majumder, Nan Duan

CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding

Sep 22, 2022
Zhijian Hou, Wanjun Zhong, Lei Ji, Difei Gao, Kun Yan, Wing-Kwong Chan, Chong-Wah Ngo, Zheng Shou, Nan Duan
