Junichi Yamagishi

SVSNet: An End-to-end Speaker Voice Similarity Assessment Model

Jul 20, 2021
Cheng-Hung Hu, Yu-Huai Peng, Junichi Yamagishi, Yu Tsao, Hsin-Min Wang

Preliminary study on using vector quantization latent spaces for TTS/VC systems with consistent performance

Jun 25, 2021
Hieu-Thi Luong, Junichi Yamagishi

Visualizing Classifier Adjacency Relations: A Case Study in Speaker Verification and Voice Anti-Spoofing

Jun 11, 2021
Tomi Kinnunen, Andreas Nautsch, Md Sahidullah, Nicholas Evans, Xin Wang, Massimiliano Todisco, Héctor Delgado, Junichi Yamagishi, Kong Aik Lee

A Multi-Level Attention Model for Evidence-Based Fact Checking

Jun 02, 2021
Canasai Kruengkrai, Junichi Yamagishi, Xin Wang

Text-to-Speech Synthesis Techniques for MIDI-to-Audio Synthesis

May 17, 2021
Erica Cooper, Xin Wang, Junichi Yamagishi

How do Voices from Past Speech Synthesis Challenges Compare Today?

May 13, 2021
Erica Cooper, Junichi Yamagishi

Exploring Disentanglement with Multilingual and Monolingual VQ-VAE

May 04, 2021
Jennifer Williams, Jason Fong, Erica Cooper, Junichi Yamagishi