
Rico Sennrich

Distributionally Robust Recurrent Decoders with Random Network Distillation

Oct 25, 2021

On the Limits of Minimal Pairs in Contrastive Evaluation

Sep 15, 2021

Improving Zero-shot Cross-lingual Transfer between Closely Related Languages by injecting Character-level Noise

Sep 14, 2021

Vision Matters When It Should: Sanity Checking Multimodal Machine Translation Models

Sep 08, 2021

Language Modeling, Lexical Translation, Reordering: The Training Process of NMT through the Lens of Classical SMT

Sep 03, 2021

How Suitable Are Subword Segmentation Strategies for Translating Non-Concatenative Morphology?

Sep 02, 2021

Revisiting Negation in Neural Machine Translation

Jul 26, 2021

Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation

May 18, 2021

Sparse Attention with Linear Units

Apr 14, 2021

On Biasing Transformer Attention Towards Monotonicity

Apr 08, 2021