Ngoc Thang Vu

IMS' Systems for the IWSLT 2021 Low-Resource Speech Translation Task

Jun 30, 2021

AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages

Apr 18, 2021

Few-shot Learning for Slot Tagging with Attentive Relational Network

Mar 03, 2021

Investigations on Audiovisual Emotion Recognition in Noisy Conditions

Mar 02, 2021

Meta-Learning for improving rare word recognition in end-to-end ASR

Feb 25, 2021

Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning

Dec 04, 2020

Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension

Oct 27, 2020

F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering

Oct 13, 2020

Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning

Jul 03, 2020

ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents

May 04, 2020