Andros Tjandra

Multi-scale Alignment and Contextual History for Attention Mechanism in Sequence-to-sequence Model
Jul 22, 2018
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Tensor Decomposition for Compressing Recurrent Neural Network
May 08, 2018
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Machine Speech Chain with One-shot Speaker Adaptation
Mar 28, 2018
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Sequence-to-Sequence ASR Optimization via Reinforcement Learning
Feb 28, 2018
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Local Monotonic Attention Mechanism for End-to-End Speech and Language Processing
Nov 03, 2017
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Attention-based Wav2Text with Feature Transfer Learning
Sep 22, 2017
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Listening while Speaking: Speech Chain by Deep Learning
Jul 16, 2017
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura

Gated Recurrent Neural Tensor Network
Jun 07, 2017
Andros Tjandra, Sakriani Sakti, Ruli Manurung, Mirna Adriani, Satoshi Nakamura

Compressing Recurrent Neural Network with Tensor Train
May 23, 2017
Andros Tjandra, Sakriani Sakti, Satoshi Nakamura
