Qun Liu

PERT: A New Solution to Pinyin to Character Conversion Task

May 24, 2022
Jinghui Xiao, Qun Liu, Xin Jiang, Yuanfeng Xiong, Haiteng Wu, Zhe Zhang

Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding

May 21, 2022
Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais

Exploring Extreme Parameter Compression for Pre-trained Language Models

May 20, 2022
Yuxin Ren, Benyou Wang, Lifeng Shang, Xin Jiang, Qun Liu

UTC: A Unified Transformer with Inter-Task Contrastive Learning for Visual Dialog

May 03, 2022
Cheng Chen, Yudong Zhu, Zhenshan Tan, Qingrong Cheng, Xin Jiang, Qun Liu, Xiaodong Gu

Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering

Apr 12, 2022
Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Lan Luo, Ke Zhan, Enrui Hu, Xinyu Zhang, Hao Jiang, Zhao Cao, Fan Yu, Xin Jiang, Qun Liu, Lei Chen

PanGu-Bot: Efficient Generative Dialogue Pre-training from Pre-trained Language Model

Apr 07, 2022
Fei Mi, Yitong Li, Yulong Zeng, Jingyan Zhou, Yasheng Wang, Chuanfei Xu, Lifeng Shang, Xin Jiang, Shiqi Zhao, Qun Liu

How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis

Mar 31, 2022
Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu

Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation

Mar 30, 2022
Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung

Compression of Generative Pre-trained Language Models via Quantization

Mar 21, 2022
Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong

Universal Conditional Masked Language Pre-training for Neural Machine Translation

Mar 20, 2022
Pengfei Li, Liangyou Li, Meng Zhang, Minghao Wu, Qun Liu
