Ron J. Weiss

G-Augment: Searching For The Meta-Structure Of Data Augmentation Policies For ASR

Oct 19, 2022
Gary Wang, Ekin D. Cubuk, Andrew Rosenberg, Shuyang Cheng, Ron J. Weiss, Bhuvana Ramabhadran, Pedro J. Moreno, Quoc V. Le, Daniel S. Park

WaveGrad 2: Iterative Refinement for Text-to-Speech Synthesis

Jun 19, 2021
Nanxin Chen, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, Najim Dehak, William Chan

Sparse, Efficient, and Semantic Mixture Invariant Training: Taming In-the-Wild Unsupervised Sound Separation

Jun 01, 2021
Scott Wisdom, Aren Jansen, Ron J. Weiss, Hakan Erdogan, John R. Hershey

Wave-Tacotron: Spectrogram-free end-to-end text-to-speech synthesis

Nov 06, 2020
Ron J. Weiss, RJ Skerry-Ryan, Eric Battenberg, Soroosh Mariooryad, Diederik P. Kingma

Multitask Training with Text Data for End-to-End Speech Recognition

Oct 27, 2020
Peidong Wang, Tara N. Sainath, Ron J. Weiss

WaveGrad: Estimating Gradients for Waveform Generation

Sep 02, 2020
Nanxin Chen, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, William Chan

Unsupervised Sound Separation Using Mixtures of Mixtures

Jun 23, 2020
Scott Wisdom, Efthymios Tzinis, Hakan Erdogan, Ron J. Weiss, Kevin Wilson, John R. Hershey

Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis

Feb 06, 2020
Guangzhi Sun, Yu Zhang, Ron J. Weiss, Yuan Cao, Heiga Zen, Yonghui Wu

Generating diverse and natural text-to-speech samples using a quantized fine-grained VAE and auto-regressive prosody prior

Feb 06, 2020
Guangzhi Sun, Yu Zhang, Ron J. Weiss, Yuan Cao, Heiga Zen, Andrew Rosenberg, Bhuvana Ramabhadran, Yonghui Wu

Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning

Jul 24, 2019
Yu Zhang, Ron J. Weiss, Heiga Zen, Yonghui Wu, Zhifeng Chen, RJ Skerry-Ryan, Ye Jia, Andrew Rosenberg, Bhuvana Ramabhadran
