Jan Rosendahl
Efficient Sequence Training of Attention Models using Approximative Recombination

Oct 18, 2021
Nils-Philipp Wynands, Wilfried Michel, Jan Rosendahl, Ralf Schlüter, Hermann Ney

Towards Reinforcement Learning for Pivot-based Neural Machine Translation with Non-autoregressive Transformer

Sep 27, 2021
Evgeniia Tokarchuk, Jan Rosendahl, Weiyue Wang, Pavel Petrushkov, Tomer Lancewicki, Shahram Khadivi, Hermann Ney

Integrated Training for Sequence-to-Sequence Models Using Non-Autoregressive Transformer

Sep 27, 2021
Evgeniia Tokarchuk, Jan Rosendahl, Weiyue Wang, Pavel Petrushkov, Tomer Lancewicki, Shahram Khadivi, Hermann Ney

Learning Bilingual Sentence Embeddings via Autoencoding and Computing Similarities with a Multilayer Perceptron

Jun 05, 2019
Yunsu Kim, Hendrik Rosendahl, Nick Rossenbach, Jan Rosendahl, Shahram Khadivi, Hermann Ney
