Nan Duan

VL-InterpreT: An Interactive Visualization Tool for Interpreting Vision-Language Transformers

Mar 30, 2022
Estelle Aflalo, Meng Du, Shao-Yen Tseng, Yongfei Liu, Chenfei Wu, Nan Duan, Vasudev Lal

CodeReviewer: Pre-Training for Automating Code Review Activities

Mar 17, 2022
Zhiyu Li, Shuai Lu, Daya Guo, Nan Duan, Shailesh Jannu, Grant Jenks, Deep Majumder, Jared Green, Alexey Svyatkovskiy, Shengyu Fu, Neel Sundaresan

Cross-Lingual Ability of Multilingual Masked Language Models: A Study of Language Structure

Mar 16, 2022
Yuan Chai, Yaobo Liang, Nan Duan

Multi-View Document Representation Learning for Open-Domain Dense Retrieval

Mar 16, 2022
Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, Nan Duan

ReACC: A Retrieval-Augmented Code Completion Framework

Mar 15, 2022
Shuai Lu, Nan Duan, Hojae Han, Daya Guo, Seung-won Hwang, Alexey Svyatkovskiy

LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval

Mar 11, 2022
Canwen Xu, Daya Guo, Nan Duan, Julian McAuley

UniXcoder: Unified Cross-Modal Pre-training for Code Representation

Mar 08, 2022
Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, Jian Yin

NÜWA-LIP: Language Guided Image Inpainting with Defect-free VQGAN

Feb 10, 2022
Minheng Ni, Chenfei Wu, Haoyang Huang, Daxin Jiang, Wangmeng Zuo, Nan Duan

CodeRetriever: Unimodal and Bimodal Contrastive Learning

Jan 26, 2022
Xiaonan Li, Yeyun Gong, Yelong Shen, Xipeng Qiu, Hang Zhang, Bolun Yao, Weizhen Qi, Daxin Jiang, Weizhu Chen, Nan Duan

Reasoning over Hybrid Chain for Table-and-Text Open Domain QA

Jan 15, 2022
Wanjun Zhong, Junjie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan
