
Ulme Wennberg

Wavebender GAN: An architecture for phonetically meaningful speech manipulation

Feb 22, 2022
Gustavo Teodoro Döhler Beck, Ulme Wennberg, Zofia Malisz, Gustav Eje Henter

The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models

Jun 03, 2021
Ulme Wennberg, Gustav Eje Henter

Entity, Relation, and Event Extraction with Contextualized Span Representations

Sep 10, 2019
David Wadden, Ulme Wennberg, Yi Luan, Hannaneh Hajishirzi
