Qian Chen

Super-resolution imaging through a multimode fiber: the physical upsampling of speckle-driven

Jul 11, 2023
Chuncheng Zhang, Tingting Liu, Zhihua Xie, Yu Wang, Tong Liu, Qian Chen, Xiubao Sui

FedBone: Towards Large-Scale Federated Multi-Task Learning

Jun 30, 2023
Yiqiang Chen, Teng Zhang, Xinlong Jiang, Qian Chen, Chenlong Gao, Wuliang Huang

3D-Speaker: A Large-Scale Multi-Device, Multi-Distance, and Multi-Dialect Corpus for Speech Representation Disentanglement

Jun 28, 2023
Siqi Zheng, Luyao Cheng, Yafeng Chen, Hui Wang, Qian Chen

Exploiting Correlations Between Contexts and Definitions with Multiple Definition Modeling

May 24, 2023
Linhan Zhang, Qian Chen, Wen Wang, Yuxin Jiang, Bing Li, Wei Wang, Xin Cao

Enhancing Generation through Summarization Duality and Explicit Outline Control

May 23, 2023
Yunzhe Li, Qian Chen, Weixiang Yan, Wen Wang, Qinglin Zhang, Hari Sundaram

BA-SOT: Boundary-Aware Serialized Output Training for Multi-Talker ASR

May 23, 2023
Yuhao Liang, Fan Yu, Yangze Li, Pengcheng Guo, Shiliang Zhang, Qian Chen, Lei Xie

Exploring Speaker-Related Information in Spoken Language Understanding for Better Speaker Diarization

May 22, 2023
Luyao Cheng, Siqi Zheng, Qinglin Zhang, Hui Wang, Yafeng Chen, Qian Chen

An Enhanced Res2Net with Local and Global Feature Fusion for Speaker Verification

May 22, 2023
Yafeng Chen, Siqi Zheng, Hui Wang, Luyao Cheng, Qian Chen, Jiajun Qi

CASA-ASR: Context-Aware Speaker-Attributed ASR

May 21, 2023
Mohan Shi, Zhihao Du, Qian Chen, Fan Yu, Yangze Li, Shiliang Zhang, Jie Zhang, Li-Rong Dai

Semantic VAD: Low-Latency Voice Activity Detection for Speech Interaction

May 21, 2023
Mohan Shi, Yuchun Shu, Lingyun Zuo, Qian Chen, Shiliang Zhang, Jie Zhang, Li-Rong Dai
