Wael Hamza

Towards ASR Robust Spoken Language Understanding Through In-Context Learning With Word Confusion Networks

Jan 05, 2024
Kevin Everson, Yile Gu, Huck Yang, Prashanth Gurunath Shivakumar, Guan-Ting Lin, Jari Kolehmainen, Ivan Bulyko, Ankur Gandhe, Shalini Ghosh, Wael Hamza, Hung-yi Lee, Ariya Rastrow, Andreas Stolcke

Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models

Jun 14, 2023
Saleh Soltan, Andy Rosenbaum, Tobias Falke, Qin Lu, Anna Rumshisky, Wael Hamza

Scalable and Accurate Self-supervised Multimodal Representation Learning without Aligned Video and Text Data

Apr 04, 2023
Vladislav Lialin, Stephen Rawls, David Chan, Shalini Ghosh, Anna Rumshisky, Wael Hamza

Low-Resource Compositional Semantic Parsing with Concept Pretraining

Jan 30, 2023
Subendhu Rongali, Mukund Sridhar, Haidar Khan, Konstantine Arkoudas, Wael Hamza, Andrew McCallum

CLASP: Few-Shot Cross-Lingual Data Augmentation for Semantic Parsing

Oct 14, 2022
Andy Rosenbaum, Saleh Soltan, Wael Hamza, Amir Saffari, Marco Damonte, Isabel Groves

LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging

Sep 20, 2022
Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, Markus Boese

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

Aug 03, 2022
Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Prem Natarajan

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022
Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakrishnan, Thomas Gueudre, Dilek Hakkani-Tur, Wael Hamza, Jonathan Hueser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, Alessandro Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith Peris, Chandana Satya Prakash, Stephen Rawls, Andy Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Harakere Sridhar, Liz Tan, Fabian Triefenbach, Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, Prem Natarajan

Training Naturalized Semantic Parsers with Very Little Data

May 04, 2022
Subendhu Rongali, Konstantine Arkoudas, Melanie Rubino, Wael Hamza
