Qun Liu

End-to-end Training and Decoding for Pivot-based Cascaded Translation Model

May 03, 2023
Hao Cheng, Meng Zhang, Liangyou Li, Qun Liu, Zhihua Zhang

Learning Homographic Disambiguation Representation for Neural Machine Translation

Apr 13, 2023
Weixuan Wang, Wei Peng, Qun Liu

PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing

Mar 20, 2023
Xiaozhe Ren, Pingyi Zhou, Xinfan Meng, Xinjing Huang, Yadao Wang, Weichao Wang, Pengfei Li, Xiaoda Zhang, Alexander Podolskiy, Grigory Arshinov, Andrey Bout, Irina Piontkovskaya, Jiansheng Wei, Xin Jiang, Teng Su, Qun Liu, Jun Yao

Adapting Pre-trained Language Models for Quantum Natural Language Processing

Feb 24, 2023
Qiuchi Li, Benyou Wang, Yudong Zhu, Christina Lioma, Qun Liu

WL-Align: Weisfeiler-Lehman Relabeling for Aligning Users across Networks via Regularized Representation Learning

Dec 29, 2022
Li Liu, Penggang Chen, Xin Li, William K. Cheung, Youmin Zhang, Qun Liu, Guoyin Wang

MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Constructing Moral Discussions

Dec 21, 2022
Hao Sun, Zhexin Zhang, Fei Mi, Yasheng Wang, Wei Liu, Jianwei Cui, Bin Wang, Qun Liu, Minlie Huang

Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding

Dec 19, 2022
Haoli Bai, Zhiguang Liu, Xiaojun Meng, Wentao Li, Shuang Liu, Nian Xie, Rongfu Zheng, Liangwei Wang, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu

AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation

Dec 17, 2022
Xingshan Zeng, Liangyou Li, Qun Liu

Retrieval-based Disentanglement with Distant Supervision

Dec 15, 2022
Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Lei Chen

G-MAP: General Memory-Augmented Pre-trained Language Model for Domain Tasks

Dec 08, 2022
Zhongwei Wan, Yichun Yin, Wei Zhang, Jiaxin Shi, Lifeng Shang, Guangyong Chen, Xin Jiang, Qun Liu
