Gustav Eje Henter

The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models

Jun 03, 2021

Robust Classification using Hidden Markov Models and Mixtures of Normalizing Flows

Feb 15, 2021

Generating coherent spontaneous speech and gesture from text

Jan 14, 2021

Full-Glow: Fully conditional Glow for more realistic image generation

Dec 10, 2020

Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation

Jul 16, 2020

Robust model training and generalisation with Studentising flows

Jul 11, 2020

Let's face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings

Jun 11, 2020

Gesticulator: A framework for semantically-aware speech-driven gesture generation

Jan 25, 2020

MoGlow: Probabilistic and controllable motion synthesis using normalising flows

May 16, 2019

Deep Encoder-Decoder Models for Unsupervised Learning of Controllable Speech Synthesis

Sep 09, 2018