Ngoc Thang Vu

Investigations on Speech Recognition Systems for Low-Resource Dialectal Arabic-English Code-Switching Speech

Aug 29, 2021

Thought Flow Nets: From Single Predictions to Trains of Model Thought

Jul 26, 2021

IMS' Systems for the IWSLT 2021 Low-Resource Speech Translation Task

Jun 30, 2021

AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages

Apr 18, 2021

Few-shot Learning for Slot Tagging with Attentive Relational Network

Mar 03, 2021

Investigations on Audiovisual Emotion Recognition in Noisy Conditions

Mar 02, 2021

Meta-Learning for improving rare word recognition in end-to-end ASR

Feb 25, 2021

Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning

Dec 04, 2020

Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension

Oct 27, 2020

F1 is Not Enough! Models and Evaluation Towards User-Centered Explainable Question Answering

Oct 13, 2020