
Jakob Uszkoreit

Insertion Transformer: Flexible Sequence Generation via Insertion Operations

Feb 08, 2019

Blockwise Parallel Decoding for Deep Autoregressive Models

Nov 07, 2018

Music Transformer

Oct 10, 2018

Universal Transformers

Jul 10, 2018

Image Transformer

Jun 15, 2018

Fast Decoding in Sequence Models using Discrete Latent Variables

Jun 07, 2018

Self-Attention with Relative Position Representations

Apr 12, 2018

Tensor2Tensor for Neural Machine Translation

Mar 16, 2018

Attention Is All You Need

Dec 06, 2017

One Model To Learn Them All

Jun 16, 2017