Minh-Thang Luong

Findings of the Second Workshop on Neural Machine Translation and Generation
Jun 18, 2018
Alexandra Birch, Andrew Finch, Minh-Thang Luong, Graham Neubig, Yusuke Oda

Learning Longer-term Dependencies in RNNs with Auxiliary Losses
Jun 13, 2018
Trieu H. Trinh, Andrew M. Dai, Minh-Thang Luong, Quoc V. Le

QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
Apr 23, 2018
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le

On the Effective Use of Pretraining for Natural Language Inference
Oct 05, 2017
Ignacio Cases, Minh-Thang Luong, Christopher Potts

Efficient Attention using a Fixed-Size Memory Representation
Jul 01, 2017
Denny Britz, Melody Y. Guan, Minh-Thang Luong

Online and Linear-Time Attention by Enforcing Monotonic Alignments
Jun 29, 2017
Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck

Massive Exploration of Neural Machine Translation Architectures
Mar 21, 2017
Denny Britz, Anna Goldie, Minh-Thang Luong, Quoc Le

Compression of Neural Machine Translation Models via Pruning
Jun 29, 2016
Abigail See, Minh-Thang Luong, Christopher D. Manning

Achieving Open Vocabulary Neural Machine Translation with Hybrid Word-Character Models
Jun 23, 2016
Minh-Thang Luong, Christopher D. Manning

Multi-task Sequence to Sequence Learning
Mar 01, 2016
Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser