Akari Asai

Fine-grained Hallucination Detection and Editing for Language Models

Jan 17, 2024
Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, Yulia Tsvetkov, Hannaneh Hajishirzi

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection

Oct 17, 2023
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi

BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer

May 24, 2023
Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, Terra Blevins, Hila Gonen, Machel Reid, Yulia Tsvetkov, Sebastian Ruder, Hannaneh Hajishirzi

TaskWeb: Selecting Better Source Tasks for Multi-task NLP

May 22, 2023
Joongwon Kim, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi

xPQA: Cross-Lingual Product Question Answering across 12 Languages

May 16, 2023
Xiaoyu Shen, Akari Asai, Bill Byrne, Adrià de Gispert

AfriQA: Cross-lingual Open-Retrieval Question Answering for African Languages

May 11, 2023
Odunayo Ogundepo, Tajuddeen R. Gwadabe, Clara E. Rivera, Jonathan H. Clark, Sebastian Ruder, David Ifeoluwa Adelani, Bonaventure F. P. Dossou, Abdou Aziz DIOP, Claytone Sikasote, Gilles Hacheme, Happy Buzaaba, Ignatius Ezeani, Rooweither Mabuya, Salomey Osei, Chris Emezue, Albert Njoroge Kahira, Shamsuddeen H. Muhammad, Akintunde Oladipo, Abraham Toluwase Owodunni, Atnafu Lambebo Tonja, Iyanuoluwa Shode, Akari Asai, Tunde Oluwaseyi Ajayi, Clemencia Siro, Steven Arthur, Mofetoluwa Adeyemi, Orevaoghene Ahia, Aremu Anuoluwapo, Oyinkansola Awosan, Chiamaka Chukwuneke, Bernard Opoku, Awokoya Ayodele, Verrah Otiende, Christine Mwase, Boyd Sinkala, Andre Niyongabo Rubungo, Daniel A. Ajisafe, Emeka Felix Onwuegbuzia, Habib Mbow, Emile Niyomutabazi, Eunice Mukonde, Falalu Ibrahim Lawan, Ibrahim Said Ahmad, Jesujoba O. Alabi, Martin Namukombo, Mbonu Chinedu, Mofya Phiri, Neo Putini, Ndumiso Mngoma, Priscilla A. Amuok, Ruqayya Nasir Iro, Sonia Adhiambo

How to Train Your DRAGON: Diverse Augmentation Towards Generalizable Dense Retrieval

Feb 15, 2023
Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, Xilun Chen

When Not to Trust Language Models: Investigating Effectiveness and Limitations of Parametric and Non-Parametric Memories

Dec 20, 2022
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, Daniel Khashabi

Beyond Counting Datasets: A Survey of Multilingual Dataset Construction and Necessary Resources

Nov 28, 2022
Xinyan Velocity Yu, Akari Asai, Trina Chatterjee, Junjie Hu, Eunsol Choi

Task-aware Retrieval with Instructions

Nov 16, 2022
Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, Wen-tau Yih
