Masato Mimura

Time-domain Speech Enhancement Assisted by Multi-resolution Frequency Encoder and Decoder

Mar 26, 2023
Hao Shi, Masato Mimura, Longbiao Wang, Jianwu Dang, Tatsuya Kawahara

Non-autoregressive Error Correction for CTC-based ASR with Phone-conditioned Masked LM

Sep 08, 2022
Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara

Distilling the Knowledge of BERT for CTC-based ASR

Sep 05, 2022
Hayato Futami, Hirofumi Inaguma, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara

ASR Rescoring and Confidence Estimation with ELECTRA

Oct 05, 2021
Hayato Futami, Hirofumi Inaguma, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara

Distilling the Knowledge of BERT for Sequence-to-Sequence ASR

Aug 09, 2020
Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara

Enhancing Monotonic Multihead Attention for Streaming ASR

May 23, 2020
Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara

Generative Adversarial Training Data Adaptation for Very Low-resource Automatic Speech Recognition

May 19, 2020
Kohei Matsuura, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara

CTC-synchronous Training for Monotonic Attention Model

May 17, 2020
Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara