
Ionut-Teodor Sorodoc


Class-Agnostic Continual Learning of Alternating Languages and Domains

Apr 07, 2020
Germán Kruszewski, Ionut-Teodor Sorodoc, Tomas Mikolov


Recurrent Instance Segmentation using Sequences of Referring Expressions

Nov 05, 2019
Alba Herrera-Palacio, Carles Ventura, Carina Silberer, Ionut-Teodor Sorodoc, Gemma Boleda, Xavier Giro-i-Nieto


What do Entity-Centric Models Learn? Insights from Entity Linking in Multi-Party Dialogue

May 16, 2019
Laura Aina, Carina Silberer, Matthijs Westera, Ionut-Teodor Sorodoc, Gemma Boleda


AMORE-UPF at SemEval-2018 Task 4: BiLSTM with Entity Library

May 14, 2018
Laura Aina, Carina Silberer, Ionut-Teodor Sorodoc, Matthijs Westera, Gemma Boleda


Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision

Apr 13, 2018
Sandro Pezzelle, Ionut-Teodor Sorodoc, Raffaella Bernardi
