
Chao Xing


Multimodal Audio-textual Architecture for Robust Spoken Language Understanding

Jun 13, 2023
Anderson R. Avila, Mehdi Rezagholizadeh, Chao Xing


DenseShift: Towards Accurate and Transferable Low-Bit Shift Network

Aug 20, 2022
Xinlin Li, Bang Liu, Rui Heng Yang, Vanessa Courville, Chao Xing, Vahid Partovi Nia


Low-bit Shift Network for End-to-End Spoken Language Understanding

Jul 15, 2022
Anderson R. Avila, Khalil Bibi, Rui Heng Yang, Xinlin Li, Chao Xing, Xiao Chen


Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding

May 21, 2022
Abbas Ghaddar, Yimeng Wu, Sunyam Bagga, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais


JABER and SABER: Junior and Senior Arabic BERt

Jan 09, 2022
Abbas Ghaddar, Yimeng Wu, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais


JABER: Junior Arabic BERt

Dec 08, 2021
Abbas Ghaddar, Yimeng Wu, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais


A Streaming End-to-End Framework For Spoken Language Understanding

Jun 08, 2021
Nihal Potdar, Anderson R. Avila, Chao Xing, Dong Wang, Yiran Cao, Xiao Chen


Transformer-based ASR Incorporating Time-reduction Layer and Fine-tuning with Self-Knowledge Distillation

Mar 17, 2021
Md Akmal Haidar, Chao Xing, Mehdi Rezagholizadeh
