Minh-Thang Luong

Efficient Attention using a Fixed-Size Memory Representation
Jul 01, 2017
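For context, a rough NumPy sketch of the paper's core idea as I read it: compress the source into K attention vectors once at encoding time, then attend over those K vectors (rather than all S source states) at every decoder step. The scoring parameterization W_a and all shapes are assumptions, not the paper's exact model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def precompute_memory(H, W_a):
    # H: (S, d) encoder states; W_a: (d, K) scoring weights (assumed form).
    # Each of the K columns defines one soft attention over the S positions,
    # so the memory C is K weighted sums of encoder states, built once.
    A = softmax(H @ W_a, axis=0)   # (S, K)
    return A.T @ H                 # (K, d) fixed-size memory

def attend(C, h_t):
    # Decoder step: score only the K memory rows, O(K) instead of O(S).
    a = softmax(C @ h_t)           # (K,)
    return a @ C                   # context vector
```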

Online and Linear-Time Attention by Enforcing Monotonic Alignments
Jun 29, 2017
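A minimal sketch of the greedy, test-time behavior this paper's hard monotonic attention enables: each decoder step resumes scanning the source where the previous step stopped, which is what makes decoding online and linear-time overall. The energy values are assumed to come from a learned energy function, and the fall-back when no position fires is my assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def monotonic_attend(energies, prev_pos):
    # energies: (S,) attention energies for the current output step.
    # prev_pos: source index attended at the previous output step.
    # Scan left to right from prev_pos; stop at the first position whose
    # selection probability crosses 0.5, so alignments never move backwards.
    for j in range(prev_pos, len(energies)):
        if sigmoid(energies[j]) >= 0.5:
            return j
    return len(energies) - 1  # nothing fired: default to last position (assumption)
```

During training the paper replaces this hard, non-differentiable scan with its expected value, so the model can still be trained with standard backpropagation.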

Massive Exploration of Neural Machine Translation Architectures
Mar 21, 2017

Compression of Neural Machine Translation Models via Pruning
Jun 29, 2016
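A small sketch of magnitude pruning in the class-blind style this paper compares (alongside class-uniform and class-distribution schemes); the dictionary layout and the exact threshold handling are assumptions:

```python
import numpy as np

def class_blind_prune(weights, sparsity):
    # weights: dict name -> np.ndarray; sparsity: fraction to zero out.
    # Pool every parameter together, find one magnitude threshold,
    # and zero all weights whose magnitude falls below it.
    flat = np.concatenate([w.ravel() for w in weights.values()])
    k = int(sparsity * flat.size)
    threshold = np.partition(np.abs(flat), k)[k]   # k-th smallest magnitude
    return {name: np.where(np.abs(w) < threshold, 0.0, w)
            for name, w in weights.items()}
```

Retraining the pruned model is what recovers most of the lost accuracy; the pruning step itself is just this thresholding.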

Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models
Jun 23, 2016
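As a rough illustration of the hybrid idea on the source side: frequent words get a normal word embedding, while rare words are composed character by character by a recurrent network. The recurrent step, initial state, and matching dimensions are all hypothetical stand-ins here, and the paper's target-side character-level generation for <unk> outputs is not shown:

```python
import numpy as np

def hybrid_embed(tokens, word_emb, char_emb, char_rnn_step, h0):
    # word_emb: dict word -> (d,) vector for in-vocabulary words.
    # char_emb: dict char -> (d,) vector; char_rnn_step, h0: assumed RNN pieces.
    vecs = []
    for tok in tokens:
        if tok in word_emb:
            vecs.append(word_emb[tok])           # word-level path
        else:
            h = h0
            for ch in tok:                       # character-level path for OOVs
                h = char_rnn_step(char_emb[ch], h)
            vecs.append(h)                       # final state stands in for the word
    return np.stack(vecs)
```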

Multi-task Sequence to Sequence Learning
Mar 01, 2016
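A compact sketch of the training loop such a multi-task setup implies: several sequence-to-sequence tasks share parameters, and each update samples a task according to fixed mixing ratios. The model objects and the train_step method are hypothetical:

```python
import random

def train_multitask(tasks, mixing_ratios, steps):
    # tasks: dict name -> (model, batch_iterator); models share e.g. an encoder.
    # mixing_ratios: dict name -> sampling weight controlling task frequency.
    names = list(tasks)
    weights = [mixing_ratios[n] for n in names]
    for _ in range(steps):
        name = random.choices(names, weights=weights, k=1)[0]
        model, batches = tasks[name]
        model.train_step(next(batches))   # one gradient step on the sampled task
```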

Effective Approaches to Attention-based Neural Machine Translation
Sep 20, 2015
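For reference, a minimal NumPy sketch of the paper's global attention with the 'dot' score: score every encoder state against the current decoder state, take a softmax, form the context vector, and combine it with the decoder state as h~_t = tanh(W_c [c_t; h_t]). The toy shapes and random inputs are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_dot_attention(H, h_t, W_c):
    # H: (S, d) encoder states; h_t: (d,) decoder state; W_c: (d, 2d) projection.
    scores = H @ h_t                              # dot score per source position
    a_t = softmax(scores)                         # alignment weights
    c_t = a_t @ H                                 # context: weighted sum of states
    h_tilde = np.tanh(W_c @ np.concatenate([c_t, h_t]))  # attentional hidden state
    return h_tilde, a_t

# toy usage
rng = np.random.default_rng(0)
S, d = 5, 8
H, h_t = rng.normal(size=(S, d)), rng.normal(size=d)
h_tilde, a_t = global_dot_attention(H, h_t, rng.normal(size=(d, 2 * d)))
```

The paper also evaluates 'general' (h_t^T W_a h_s) and 'concat' scores, plus a local variant that attends only over a window around a predicted source position.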

When Are Tree Structures Necessary for Deep Learning of Representations?
Aug 18, 2015

A Hierarchical Neural Autoencoder for Paragraphs and Documents
Jun 06, 2015

Addressing the Rare Word Problem in Neural Machine Translation
May 30, 2015
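The post-processing step this paper proposes is easy to sketch: the model is trained with alignment annotations so each emitted <unk> can be traced to a source position, and a dictionary lookup (or a plain copy) then fills it in. The toy sentence pair and the given alignments are hypothetical:

```python
def replace_unks(target_tokens, source_tokens, alignments, dictionary):
    # alignments: target position -> source position (from the annotated
    # training scheme in the paper; taken as given here).
    out = []
    for i, tok in enumerate(target_tokens):
        if tok == '<unk>':
            src = source_tokens[alignments[i]]
            out.append(dictionary.get(src, src))  # translate, else copy
        else:
            out.append(tok)
    return out

# toy usage
print(replace_unks(['er', 'besucht', '<unk>'],
                   ['he', 'visits', 'Innsbruck'],
                   [0, 1, 2], {}))   # -> ['er', 'besucht', 'Innsbruck']
```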