Zhuo Chen

Why does Self-Supervised Learning for Speech Recognition Benefit Speaker Recognition?

Apr 27, 2022
Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Zhuo Chen, Peidong Wang, Gang Liu, Jinyu Li, Jian Wu, Xiangzhan Yu, Furu Wei

Streaming Speaker-Attributed ASR with Token-Level Speaker Embeddings

Mar 30, 2022
Naoyuki Kanda, Jian Wu, Yu Wu, Xiong Xiao, Zhong Meng, Xiaofei Wang, Yashesh Gaur, Zhuo Chen, Jinyu Li, Takuya Yoshioka

Knowledge-informed Molecular Learning: A Survey on Paradigm Transfer

Feb 17, 2022
Yin Fang, Qiang Zhang, Zhuo Chen, Xiaohui Fan, Huajun Chen

Streaming Multi-Talker ASR with Token-Level Serialized Output Training

Feb 05, 2022
Naoyuki Kanda, Jian Wu, Yu Wu, Xiong Xiao, Zhong Meng, Xiaofei Wang, Yashesh Gaur, Zhuo Chen, Jinyu Li, Takuya Yoshioka

Low-resource Learning with Knowledge Graphs: A Comprehensive Survey

Dec 28, 2021
Jiaoyan Chen, Yuxia Geng, Zhuo Chen, Jeff Z. Pan, Yuan He, Wen Zhang, Ian Horrocks, Huajun Chen

A New Image Codec Paradigm for Human and Machine Uses

Dec 19, 2021
Sien Chen, Jian Jin, Lili Meng, Weisi Lin, Zhuo Chen, Tsui-Shan Chang, Zhengguang Li, Huaxiang Zhang

Molecular Contrastive Learning with Chemical Element Knowledge Graph

Dec 01, 2021
Yin Fang, Qiang Zhang, Haihong Yang, Xiang Zhuang, Shumin Deng, Wen Zhang, Ming Qin, Zhuo Chen, Xiaohui Fan, Huajun Chen

Separating Long-Form Speech with Group-Wise Permutation Invariant Training

Nov 17, 2021
Wangyou Zhang, Zhuo Chen, Naoyuki Kanda, Shujie Liu, Jinyu Li, Sefik Emre Eskimez, Takuya Yoshioka, Xiong Xiao, Zhong Meng, Yanmin Qian, Furu Wei

WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

Oct 29, 2021
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei
