Shiliang Zhang

Loss Masking Is Not Needed in Decoder-only Transformer for Discrete-token Based ASR
Nov 08, 2023
Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, Shiliang Zhang, Chong Deng, Yukun Ma, Hai Yu, Jiaqing Liu, Chong Zhang

LauraGPT: Listen, Attend, Understand, and Regenerate Audio with GPT
Oct 11, 2023
Jiaming Wang, Zhihao Du, Qian Chen, Yunfei Chu, Zhifu Gao, Zerui Li, Kai Hu, Xiaohuan Zhou, Jin Xu, Ziyang Ma, Wen Wang, Siqi Zheng, Chang Zhou, Zhijie Yan, Shiliang Zhang

SA-Paraformer: Non-autoregressive End-to-End Speaker-Attributed ASR
Oct 07, 2023
Yangze Li, Fan Yu, Yuhao Liang, Pengcheng Guo, Mohan Shi, Zhihao Du, Shiliang Zhang, Lei Xie

Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs
Oct 01, 2023
Shiyu Xuan, Qingpei Guo, Ming Yang, Shiliang Zhang

Exploring RWKV for Memory Efficient and Low Latency Streaming ASR
Sep 26, 2023
Keyu An, Shiliang Zhang

The second multi-channel multi-party meeting transcription challenge (M2MeT 2.0): A benchmark for speaker-attributed ASR
Sep 24, 2023
Yuhao Liang, Mohan Shi, Fan Yu, Yangze Li, Shiliang Zhang, Zhihao Du, Qian Chen, Lei Xie, Yanmin Qian, Jian Wu, Zhuo Chen, Kong Aik Lee, Zhijie Yan, Hui Bu

Improving Speaker Diarization using Semantic Information: Joint Pairwise Constraints Propagation
Sep 19, 2023
Luyao Cheng, Siqi Zheng, Qinglin Zhang, Hui Wang, Yafeng Chen, Qian Chen, Shiliang Zhang

Leveraging Speech PTM, Text LLM, and Emotional TTS for Speech Emotion Recognition
Sep 19, 2023
Ziyang Ma, Wen Wu, Zhisheng Zheng, Yiwei Guo, Qian Chen, Shiliang Zhang, Xie Chen

Incorporating Class-based Language Model for Named Entity Recognition in Factorized Neural Transducer
Sep 14, 2023
Peng Wang, Yifan Yang, Zheng Liang, Tian Tan, Shiliang Zhang, Xie Chen

FunCodec: A Fundamental, Reproducible and Integrable Open-source Toolkit for Neural Speech Codec
Sep 14, 2023
Zhihao Du, Shiliang Zhang, Kai Hu, Siqi Zheng