Chia-Yu Li

Oh, Jeez! or Uh-huh? A Listener-aware Backchannel Predictor on ASR Transcriptions

Apr 10, 2023
Daniel Ortega, Chia-Yu Li, Ngoc Thang Vu

Integrating Knowledge in End-to-End Automatic Speech Recognition for Mandarin-English Code-Switching

Dec 19, 2021
Chia-Yu Li, Ngoc Thang Vu

Improving Code-switching Language Modeling with Artificially Generated Texts using Cycle-consistent Adversarial Networks

Dec 12, 2021
Chia-Yu Li, Ngoc Thang Vu

Improving Speech Recognition on Noisy Speech via Speech Enhancement with Multi-Discriminators CycleGAN

Dec 12, 2021
Chia-Yu Li, Ngoc Thang Vu

Investigations on Speech Recognition Systems for Low-Resource Dialectal Arabic-English Code-Switching Speech

Aug 29, 2021
Injy Hamed, Pavel Denisov, Chia-Yu Li, Mohamed Elmahdy, Slim Abdennadher, Ngoc Thang Vu

ADVISER: A Toolkit for Developing Multi-modal, Multi-domain and Socially-engaged Conversational Agents

May 04, 2020
Chia-Yu Li, Daniel Ortega, Dirk Väth, Florian Lux, Lindsey Vanderlyn, Maximilian Schmidt, Michael Neumann, Moritz Völkel, Pavel Denisov, Sabrina Jenne, Zorica Kacarevic, Ngoc Thang Vu

Context-aware Neural-based Dialog Act Classification on Automatically Generated Transcriptions

Feb 28, 2019
Daniel Ortega, Chia-Yu Li, Gisela Vallejo, Pavel Denisov, Ngoc Thang Vu
