Andros Tjandra

Unsupervised Learning of Disentangled Speech Content and Style Representation
Oct 24, 2020
Andros Tjandra, Ruoming Pang, Yu Zhang, Shigeki Karita

Transformer VQ-VAE for Unsupervised Unit Discovery and Speech Synthesis: ZeroSpeech 2020 Challenge
May 24, 2020
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Deja-vu: Double Feature Presentation in Deep Transformer Networks
Oct 23, 2019
Andros Tjandra, Chunxi Liu, Frank Zhang, Xiaohui Zhang, Yongqiang Wang, Gabriel Synnaeve, Satoshi Nakamura, Geoffrey Zweig

Transformer-based Acoustic Modeling for Hybrid Speech Recognition
Oct 22, 2019
Yongqiang Wang, Abdelrahman Mohamed, Duc Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, Christian Fuegen, Geoffrey Zweig, Michael L. Seltzer

Speech-to-speech Translation between Untranscribed Unknown Languages
Oct 05, 2019
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

From Speech Chain to Multimodal Chain: Leveraging Cross-modal Data Augmentation for Semi-supervised Learning
Jun 03, 2019
Johanes Effendi, Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

VQVAE Unsupervised Unit Discovery and Multi-scale Code2Spec Inverter for Zerospeech Challenge 2019
May 29, 2019
Andros Tjandra, Berrak Sisman, Mingyang Zhang, Sakriani Sakti, Haizhou Li, Satoshi Nakamura

End-to-End Feedback Loss in Speech Chain Framework via Straight-Through Estimator
Oct 31, 2018
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura