Chandra Bhagavatula

NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation
Dec 10, 2023
Peter West, Ronan Le Bras, Taylor Sorensen, Bill Yuchen Lin, Liwei Jiang, Ximing Lu, Khyathi Chandu, Jack Hessel, Ashutosh Baheti, Chandra Bhagavatula, Yejin Choi

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning
Dec 04, 2023
Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi

"You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation
Oct 26, 2023
Allyson Ettinger, Jena D. Hwang, Valentina Pyatkin, Chandra Bhagavatula, Yejin Choi

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement
Oct 12, 2023
Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties
Sep 02, 2023
Taylor Sorensen, Liwei Jiang, Jena Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi

Faith and Fate: Limits of Transformers on Compositionality
Jun 01, 2023
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning
May 31, 2023
Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Jena D. Hwang, Xiang Lorraine Li, Hirona J. Arai, Soumya Sanyal, Keisuke Sakaguchi, Xiang Ren, Yejin Choi

SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks
May 27, 2023
Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, Xiang Ren

Reinforced Clarification Question Generation with Defeasibility Rewards for Disambiguating Social and Moral Situations
Dec 20, 2022
Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Ximing Lu, Liwei Jiang, Yejin Choi, Chandra Bhagavatula

I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation
Dec 19, 2022
Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Ronan Le Bras, Ximing Lu, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, Yejin Choi