Michiel Bacchiani

LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus

May 30, 2023
Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Michiel Bacchiani, Yu Zhang, Wei Han, Ankur Bapna


Miipher: A Robust Speech Restoration Model Integrating Self-Supervised Speech and Text Representations

Mar 03, 2023
Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Yu Zhang, Wei Han, Ankur Bapna, Michiel Bacchiani


WaveFit: An Iterative and Non-autoregressive Neural Vocoder based on Fixed-Point Iteration

Oct 03, 2022
Yuma Koizumi, Kohei Yatabe, Heiga Zen, Michiel Bacchiani


SpecGrad: Diffusion Probabilistic Model based Neural Vocoder with Adaptive Noise Spectral Shaping

Mar 31, 2022
Yuma Koizumi, Heiga Zen, Kohei Yatabe, Nanxin Chen, Michiel Bacchiani


Knowledge Transfer from Large-scale Pretrained Language Models to End-to-end Speech Recognizers

Feb 16, 2022
Yotaro Kubo, Shigeki Karita, Michiel Bacchiani


SNRi Target Training for Joint Speech Enhancement and Recognition

Nov 01, 2021
Yuma Koizumi, Shigeki Karita, Arun Narayanan, Sankaran Panchapagesan, Michiel Bacchiani


DF-Conformer: Integrated architecture of Conv-TasNet and Conformer using linear complexity self-attention for speech enhancement

Jun 30, 2021
Yuma Koizumi, Shigeki Karita, Scott Wisdom, Hakan Erdogan, John R. Hershey, Llion Jones, Michiel Bacchiani


From Audio to Semantics: Approaches to end-to-end spoken language understanding

Sep 24, 2018
Parisa Haghani, Arun Narayanan, Michiel Bacchiani, Galen Chuang, Neeraj Gaur, Pedro Moreno, Rohit Prabhavalkar, Zhongdi Qu, Austin Waters


Toward domain-invariant speech recognition via large scale training

Aug 16, 2018
Arun Narayanan, Ananya Misra, Khe Chai Sim, Golan Pundak, Anshuman Tripathi, Mohamed Elfeky, Parisa Haghani, Trevor Strohman, Michiel Bacchiani


State-of-the-art Speech Recognition With Sequence-to-Sequence Models

Feb 23, 2018
Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, Navdeep Jaitly, Bo Li, Jan Chorowski, Michiel Bacchiani
