Naoyuki Kanda

Simulating realistic speech overlaps improves multi-talker ASR
Nov 17, 2022
Muqiao Yang, Naoyuki Kanda, Xiaofei Wang, Jian Wu, Sunit Sivasankaran, Zhuo Chen, Jinyu Li, Takuya Yoshioka

Breaking trade-offs in speech separation with sparsely-gated mixture of experts
Nov 11, 2022
Xiaofei Wang, Zhuo Chen, Yu Shi, Jian Wu, Naoyuki Kanda, Takuya Yoshioka

Self-supervised learning with bi-label masked speech prediction for streaming multi-talker speech recognition
Nov 10, 2022
Zili Huang, Zhuo Chen, Naoyuki Kanda, Jian Wu, Yiming Wang, Jinyu Li, Takuya Yoshioka, Xiaofei Wang, Peidong Wang

Speech separation with large-scale self-supervised learning
Nov 09, 2022
Zhuo Chen, Naoyuki Kanda, Jian Wu, Yu Wu, Xiaofei Wang, Takuya Yoshioka, Jinyu Li, Sunit Sivasankaran, Sefik Emre Eskimez

VarArray Meets t-SOT: Advancing the State of the Art of Streaming Distant Conversational Speech Recognition
Sep 12, 2022
Naoyuki Kanda, Jian Wu, Xiaofei Wang, Zhuo Chen, Jinyu Li, Takuya Yoshioka

Target Speaker Voice Activity Detection with Transformers and Its Integration with End-to-End Neural Diarization
Aug 27, 2022
Dongmei Wang, Xiong Xiao, Naoyuki Kanda, Takuya Yoshioka, Jian Wu

i-Code: An Integrative and Composable Multimodal Learning Framework
May 05, 2022
Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, Liyang Lu, Yujia Xie, Robert Gmyr, Noel Codella, Naoyuki Kanda, Bin Xiao, Lu Yuan, Takuya Yoshioka, Michael Zeng, Xuedong Huang

Leveraging Real Conversational Data for Multi-Channel Continuous Speech Separation
Apr 07, 2022
Xiaofei Wang, Dongmei Wang, Naoyuki Kanda, Sefik Emre Eskimez, Takuya Yoshioka

Streaming Speaker-Attributed ASR with Token-Level Speaker Embeddings
Mar 30, 2022
Naoyuki Kanda, Jian Wu, Yu Wu, Xiong Xiao, Zhong Meng, Xiaofei Wang, Yashesh Gaur, Zhuo Chen, Jinyu Li, Takuya Yoshioka
