Hirokazu Kameoka

CycleGAN-VC2: Improved CycleGAN-based Non-parallel Voice Conversion

Apr 09, 2019
Takuhiro Kaneko, Hirokazu Kameoka, Kou Tanaka, Nobukatsu Hojo

Crossmodal Voice Conversion

Apr 09, 2019
Hirokazu Kameoka, Kou Tanaka, Aaron Valero Puche, Yasunori Ohishi, Takuhiro Kaneko

WaveCycleGAN2: Time-domain Neural Post-filter for Speech Waveform Generation

Apr 09, 2019
Kou Tanaka, Hirokazu Kameoka, Takuhiro Kaneko, Nobukatsu Hojo

Training a Neural Speech Waveform Model using Spectral Losses of Short-Time Fourier Transform and Continuous Wavelet Transform

Apr 07, 2019
Shinji Takaki, Hirokazu Kameoka, Junichi Yamagishi

Fast MVAE: Joint separation and classification of mixed sources based on multichannel variational autoencoder with auxiliary classifier

Dec 16, 2018
Li Li, Hirokazu Kameoka, Shoji Makino

AttS2S-VC: Sequence-to-Sequence Voice Conversion with Attention and Context Preservation Mechanisms

Nov 09, 2018
Kou Tanaka, Hirokazu Kameoka, Takuhiro Kaneko, Nobukatsu Hojo

ConvS2S-VC: Fully convolutional sequence-to-sequence voice conversion

Nov 05, 2018
Hirokazu Kameoka, Kou Tanaka, Takuhiro Kaneko, Nobukatsu Hojo

Generalized Multichannel Variational Autoencoder for Underdetermined Source Separation

Sep 29, 2018
Shogo Seki, Hirokazu Kameoka, Li Li, Tomoki Toda, Kazuya Takeda

WaveCycleGAN: Synthetic-to-natural speech waveform conversion using cycle-consistent adversarial networks

Sep 28, 2018
Kou Tanaka, Takuhiro Kaneko, Nobukatsu Hojo, Hirokazu Kameoka

ACVAE-VC: Non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder

Aug 26, 2018
Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo
