Andy Rosenbaum

GeMQuAD: Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning

Apr 14, 2024
Amani Namboori, Shivam Mangale, Andy Rosenbaum, Saleh Soltan

Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models

Jun 14, 2023
Saleh Soltan, Andy Rosenbaum, Tobias Falke, Qin Lu, Anna Rumshisky, Wael Hamza

PLACES: Prompting Language Models for Social Conversation Synthesis

Feb 17, 2023
Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Seokhwan Kim, Andy Rosenbaum, Yang Liu, Zhou Yu, Dilek Hakkani-Tur

Weakly Supervised Data Augmentation Through Prompting for Dialogue Understanding

Nov 02, 2022
Maximillian Chen, Alexandros Papangelis, Chenyang Tao, Andy Rosenbaum, Seokhwan Kim, Yang Liu, Zhou Yu, Dilek Hakkani-Tur

CLASP: Few-Shot Cross-Lingual Data Augmentation for Semantic Parsing

Oct 14, 2022
Andy Rosenbaum, Saleh Soltan, Wael Hamza, Amir Saffari, Marco Damonte, Isabel Groves

LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging

Sep 20, 2022
Andy Rosenbaum, Saleh Soltan, Wael Hamza, Yannick Versley, Markus Boese

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

Aug 03, 2022
Saleh Soltan, Shankar Ananthakrishnan, Jack FitzGerald, Rahul Gupta, Wael Hamza, Haidar Khan, Charith Peris, Stephen Rawls, Andy Rosenbaum, Anna Rumshisky, Chandana Satya Prakash, Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gokhan Tur, Prem Natarajan

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

Jun 15, 2022
Jack FitzGerald, Shankar Ananthakrishnan, Konstantine Arkoudas, Davide Bernardi, Abhishek Bhagia, Claudio Delli Bovi, Jin Cao, Rakesh Chada, Amit Chauhan, Luoxin Chen, Anurag Dwarakanath, Satyam Dwivedi, Turan Gojayev, Karthik Gopalakrishnan, Thomas Gueudre, Dilek Hakkani-Tur, Wael Hamza, Jonathan Hueser, Kevin Martin Jose, Haidar Khan, Beiye Liu, Jianhua Lu, Alessandro Manzotti, Pradeep Natarajan, Karolina Owczarzak, Gokmen Oz, Enrico Palumbo, Charith Peris, Chandana Satya Prakash, Stephen Rawls, Andy Rosenbaum, Anjali Shenoy, Saleh Soltan, Mukund Harakere Sridhar, Liz Tan, Fabian Triefenbach, Pan Wei, Haiyang Yu, Shuai Zheng, Gokhan Tur, Prem Natarajan
