Takaaki Hori

Multi-Stream End-to-End Speech Recognition

Jun 17, 2019
Ruizhi Li, Xiaofei Wang, Sri Harish Mallidi, Shinji Watanabe, Takaaki Hori, Hynek Hermansky

Self-supervised Sequence-to-sequence ASR using Unpaired Speech and Text

Apr 30, 2019
Murali Karthick Baskar, Shinji Watanabe, Ramon Astudillo, Takaaki Hori, Lukáš Burget, Jan Černocký

Stream attention-based multi-array end-to-end speech recognition

Nov 12, 2018
Xiaofei Wang, Ruizhi Li, Sri Harish Mallidi, Takaaki Hori, Shinji Watanabe, Hynek Hermansky

Multi-encoder multi-resolution framework for end-to-end speech recognition

Nov 12, 2018
Ruizhi Li, Xiaofei Wang, Sri Harish Mallidi, Takaaki Hori, Shinji Watanabe, Hynek Hermansky

Vectorization of hypotheses and speech for faster beam search in encoder decoder-based speech recognition

Nov 12, 2018
Hiroshi Seki, Takaaki Hori, Shinji Watanabe

Analysis of Multilingual Sequence-to-Sequence speech recognition systems

Nov 07, 2018
Martin Karafiát, Murali Karthick Baskar, Shinji Watanabe, Takaaki Hori, Matthew Wiesner, Jan "Honza" Černocký

Promising Accurate Prefix Boosting for sequence-to-sequence ASR

Nov 07, 2018
Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Martin Karafiát, Takaaki Hori, Jan Honza Černocký

CNN-based MultiChannel End-to-End Speech Recognition for everyday home environments

Nov 07, 2018
Nelson Yalta, Shinji Watanabe, Takaaki Hori, Kazuhiro Nakadai, Tetsuya Ogata

Cycle-consistency training for end-to-end speech recognition

Nov 02, 2018
Takaaki Hori, Ramon Astudillo, Tomoki Hayashi, Yu Zhang, Shinji Watanabe, Jonathan Le Roux

End-to-end Speech Recognition with Word-based RNN Language Models

Aug 08, 2018
Takaaki Hori, Jaejin Cho, Shinji Watanabe
