Abteen Ebrahimi

Since the Scientific Literature Is Multilingual, Our Models Should Be Too

Mar 27, 2024
Abteen Ebrahimi, Kenneth Church

Meeting the Needs of Low-Resource Languages: The Value of Automatic Alignments via Pretrained Models

Feb 15, 2023
Abteen Ebrahimi, Arya D. McCarthy, Arturo Oncevay, Luis Chiruzzo, John E. Ortega, Gustavo A. Giménez-Lugo, Rolando Coto-Solano, Katharina Kann

How to Adapt Your Pretrained Multilingual Model to 1600 Languages

Jun 03, 2021
Abteen Ebrahimi, Katharina Kann

AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages

Apr 18, 2021
Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, Katharina Kann

Athena: Constructing Dialogues Dynamically with Discourse Constraints

Nov 21, 2020
Vrindavan Harrison, Juraj Juraska, Wen Cui, Lena Reed, Kevin K. Bowden, Jiaqi Wu, Brian Schwarzmann, Abteen Ebrahimi, Rishi Rajasekaran, Nikhil Varghese, Max Wechsler-Azen, Steve Whittaker, Jeffrey Flanigan, Marilyn Walker

Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG

Jun 14, 2019
Shereen Oraby, Vrindavan Harrison, Abteen Ebrahimi, Marilyn Walker
