
Ngoc Thang Vu


IMS' Systems for the IWSLT 2021 Low-Resource Speech Translation Task

Jun 30, 2021
Pavel Denisov, Manuel Mager, Ngoc Thang Vu


AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages

Apr 18, 2021
Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, Katharina Kann


Few-shot Learning for Slot Tagging with Attentive Relational Network

Mar 03, 2021
Cennet Oguz, Ngoc Thang Vu


Investigations on Audiovisual Emotion Recognition in Noisy Conditions

Mar 02, 2021
Michael Neumann, Ngoc Thang Vu


Meta-Learning for improving rare word recognition in end-to-end ASR

Feb 25, 2021
Florian Lux, Ngoc Thang Vu


Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning

Dec 04, 2020
Daniel Grießhaber, Johannes Maucher, Ngoc Thang Vu


Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension

Oct 27, 2020
Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling, Ngoc Thang Vu


F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering

Oct 13, 2020
Hendrik Schuff, Heike Adel, Ngoc Thang Vu


Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning

Jul 03, 2020
Pavel Denisov, Ngoc Thang Vu
