Nouha Dziri

CULTURE-GEN: Revealing Global Cultural Perception in Language Models through Natural Language Prompting

Apr 16, 2024
Huihan Li, Liwei Jiang, Nouha Dziri, Xiang Ren, Yejin Choi

RewardBench: Evaluating Reward Models for Language Modeling

Mar 20, 2024
Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi

A Roadmap to Pluralistic Alignment

Feb 07, 2024
Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

Dec 04, 2023
Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi

What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations

Nov 01, 2023
Kavel Rao, Liwei Jiang, Valentina Pyatkin, Yuling Gu, Niket Tandon, Nouha Dziri, Faeze Brahman, Yejin Choi

The Generative AI Paradox: "What It Can Create, It May Not Understand"

Oct 31, 2023
Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

Oct 12, 2023
Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties

Sep 02, 2023
Taylor Sorensen, Liwei Jiang, Jena Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi

Fine-Grained Human Feedback Gives Better Rewards for Language Model Training

Jun 02, 2023
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi
