Heiga Zen

Non-Attentive Tacotron: Robust and Controllable Neural TTS Synthesis Including Unsupervised Duration Modeling

Oct 08, 2020
Jonathan Shen, Ye Jia, Mike Chrzanowski, Yu Zhang, Isaac Elias, Heiga Zen, Yonghui Wu

WaveGrad: Estimating Gradients for Waveform Generation

Sep 02, 2020
Nanxin Chen, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, William Chan

Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis

Feb 06, 2020
Guangzhi Sun, Yu Zhang, Ron J. Weiss, Yuan Cao, Heiga Zen, Yonghui Wu

Generating diverse and natural text-to-speech samples using a quantized fine-grained VAE and auto-regressive prosody prior

Feb 06, 2020
Guangzhi Sun, Yu Zhang, Ron J. Weiss, Yuan Cao, Heiga Zen, Andrew Rosenberg, Bhuvana Ramabhadran, Yonghui Wu

Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning

Jul 24, 2019
Yu Zhang, Ron J. Weiss, Heiga Zen, Yonghui Wu, Zhifeng Chen, RJ Skerry-Ryan, Ye Jia, Andrew Rosenberg, Bhuvana Ramabhadran

Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling

Feb 21, 2019
Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara Sainath, Yuan Cao, Chung-Cheng Chiu, Yanzhang He, Jan Chorowski, Smit Hinsu, Stella Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, Melvin Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, Raziel Alvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, Mike Schuster, Yanping Huang, Dehao Chen, Kazuki Irie, George Foster, John Richardson, Klaus Macherey, Antoine Bruguier, Heiga Zen, Colin Raffel, Shankar Kumar, Kanishka Rao, David Rybach, Matthew Murray, Vijayaditya Peddinti, Maxim Krikun, Michiel A. U. Bacchiani, Thomas B. Jablin, Rob Suderman, Ian Williams, Benjamin Lee, Deepti Bhatia, Justin Carlson, Semih Yavuz, Yu Zhang, Ian McGraw, Max Galkin, Qi Ge, Golan Pundak, Chad Whipkey, Todd Wang, Uri Alon, Dmitry Lepikhin, Ye Tian, Sara Sabour, William Chan, Shubham Toshniwal, Baohua Liao, Michael Nirschl, Pat Rondon

Hierarchical Generative Modeling for Controllable Speech Synthesis

Oct 16, 2018
Wei-Ning Hsu, Yu Zhang, Ron J. Weiss, Heiga Zen, Yonghui Wu, Yuxuan Wang, Yuan Cao, Ye Jia, Zhifeng Chen, Jonathan Shen, Patrick Nguyen, Ruoming Pang

Sample Efficient Adaptive Text-to-Speech

Sep 27, 2018
Yutian Chen, Yannis Assael, Brendan Shillingford, David Budden, Scott Reed, Heiga Zen, Quan Wang, Luis C. Cobo, Andrew Trask, Ben Laurie, Caglar Gulcehre, Aäron van den Oord, Oriol Vinyals, Nando de Freitas

Parallel WaveNet: Fast High-Fidelity Speech Synthesis

Nov 28, 2017
Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C. Cobo, Florian Stimberg, Norman Casagrande, Dominik Grewe, Seb Noury, Sander Dieleman, Erich Elsen, Nal Kalchbrenner, Heiga Zen, Alex Graves, Helen King, Tom Walters, Dan Belov, Demis Hassabis
