Ming Zhou

Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning

Apr 07, 2020
Daya Guo, Akari Asai, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Jian Yin, Ming Zhou

At Which Level Should We Extract? An Empirical Study on Extractive Document Summarization

Apr 06, 2020
Qingyu Zhou, Furu Wei, Ming Zhou

Learning to Summarize Passages: Mining Passage-Summary Pairs from Wikipedia Revision Histories

Apr 06, 2020
Qingyu Zhou, Furu Wei, Ming Zhou

STEP: Sequence-to-Sequence Transformer Pre-training for Document Summarization

Apr 04, 2020
Yanyan Zou, Xingxing Zhang, Wei Lu, Furu Wei, Ming Zhou

XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation

Apr 03, 2020
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Bruce Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Ming Zhou

XGPT: Cross-modal Generative Pre-Training for Image Captioning

Mar 04, 2020
Qiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, Xin Liu, Ming Zhou

UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training

Feb 28, 2020
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers

Feb 25, 2020
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training

Feb 22, 2020
Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou

CodeBERT: A Pre-Trained Model for Programming and Natural Languages

Feb 19, 2020
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
