Junichi Yamagishi


Fashion-Guided Adversarial Attack on Person Segmentation

Apr 20, 2021
Marc Treu, Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen


Multi-Metric Optimization using Generative Adversarial Networks for Near-End Speech Intelligibility Enhancement

Apr 17, 2021
Haoyu Li, Junichi Yamagishi


An Initial Investigation for Detecting Partially Spoofed Audio

Apr 06, 2021
Lin Zhang, Xin Wang, Erica Cooper, Junichi Yamagishi, Jose Patino, Nicholas Evans


Attention Back-end for Automatic Speaker Verification with Multiple Enrollment Utterances

Apr 04, 2021
Chang Zeng, Xin Wang, Erica Cooper, Junichi Yamagishi


ASVspoof 2019: spoofing countermeasures for the detection of synthesized, converted and replayed speech

Feb 11, 2021
Andreas Nautsch, Xin Wang, Nicholas Evans, Tomi Kinnunen, Ville Vestman, Massimiliano Todisco, Héctor Delgado, Md Sahidullah, Junichi Yamagishi, Kong Aik Lee


Pretraining Strategies, Waveform Model Choice, and Acoustic Configurations for Multi-Speaker End-to-End Speech Synthesis

Nov 10, 2020
Erica Cooper, Xin Wang, Yi Zhao, Yusuke Yasuda, Junichi Yamagishi


Learning Disentangled Phone and Speaker Representations in a Semi-Supervised VQ-VAE Paradigm

Oct 21, 2020
Jennifer Williams, Yi Zhao, Erica Cooper, Junichi Yamagishi


Grapheme or phoneme? An Analysis of Tacotron's Embedded Representations

Oct 21, 2020
Antoine Perquin, Erica Cooper, Junichi Yamagishi


End-to-End Text-to-Speech using Latent Duration based on VQ-VAE

Oct 20, 2020
Yusuke Yasuda, Xin Wang, Junichi Yamagishi
