Yejin Choi

In Search of the Long-Tail: Systematic Generation of Long-Tail Knowledge via Logical Rule Guided Search

Nov 13, 2023
Huihan Li, Yuting Ning, Zeyi Liao, Siyuan Wang, Xiang Lorraine Li, Ximing Lu, Faeze Brahman, Wenting Zhao, Yejin Choi, Xiang Ren

STEER: Unified Style Transfer with Expert Reinforcement

Nov 13, 2023
Skyler Hallinan, Faeze Brahman, Ximing Lu, Jaehun Jung, Sean Welleck, Yejin Choi

Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs

Nov 09, 2023
Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, Bill Yuchen Lin

Tailoring Self-Rationalizers with Multi-Reward Distillation

Nov 06, 2023
Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren

What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations

Nov 01, 2023
Kavel Rao, Liwei Jiang, Valentina Pyatkin, Yuling Gu, Niket Tandon, Nouha Dziri, Faeze Brahman, Yejin Choi

The Generative AI Paradox: "What It Can Create, It May Not Understand"

Oct 31, 2023
Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

Oct 31, 2023
Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap

Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory

Oct 27, 2023
Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, Yejin Choi

"You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation

Oct 26, 2023
Allyson Ettinger, Jena D. Hwang, Valentina Pyatkin, Chandra Bhagavatula, Yejin Choi

Figure 1 for "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation
Figure 2 for "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation
Figure 3 for "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation
Figure 4 for "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation
Viaarxiv icon