Dong Yu

C-MORE: Pretraining to Answer Open-Domain Questions by Consulting Millions of References
Mar 22, 2022
Xiang Yue, Xiaoman Pan, Wenlin Yao, Dian Yu, Dong Yu, Jianshu Chen

Towards Abstractive Grounded Summarization of Podcast Transcripts
Mar 22, 2022
Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu

Learning-by-Narrating: Narrative Pre-Training for Zero-Shot Dialogue Comprehension
Mar 19, 2022
Chao Zhao, Wenlin Yao, Dian Yu, Kaiqiang Song, Dong Yu, Jianshu Chen

Full RGB Just Noticeable Difference (JND) Modelling
Mar 01, 2022
Jian Jin, Dong Yu, Weisi Lin, Lili Meng, Hao Wang, Huaxiang Zhang

VCVTS: Multi-speaker Video-to-Speech synthesis via cross-modal knowledge transfer from voice conversion
Feb 18, 2022
Disong Wang, Shan Yang, Dan Su, Xunying Liu, Dong Yu, Helen Meng

FlowEval: A Consensus-Based Dialogue Evaluation Framework Using Segment Act Flows
Feb 14, 2022
Jianqiao Zhao, Yanyang Li, Wanyu Du, Yangfeng Ji, Dong Yu, Michael R. Lyu, Liwei Wang

DiffGAN-TTS: High-Fidelity and Efficient Text-to-Speech with Denoising Diffusion GANs
Jan 28, 2022
Songxiang Liu, Dan Su, Dong Yu

Improving Mandarin End-to-End Speech Recognition with Word N-gram Language Model
Jan 06, 2022
Jinchuan Tian, Jianwei Yu, Chao Weng, Yuexian Zou, Dong Yu

Consistent Training and Decoding For End-to-end Speech Recognition Using Lattice-free MMI
Dec 30, 2021
Jinchuan Tian, Jianwei Yu, Chao Weng, Shi-Xiong Zhang, Dan Su, Dong Yu, Yuexian Zou

Joint Modeling of Code-Switched and Monolingual ASR via Conditional Factorization
Nov 29, 2021
Brian Yan, Chunlei Zhang, Meng Yu, Shi-Xiong Zhang, Siddharth Dalmia, Dan Berrebbi, Chao Weng, Shinji Watanabe, Dong Yu