Ximing Lu

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

Oct 12, 2023
Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties

Sep 02, 2023
Taylor Sorensen, Liwei Jiang, Jena Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi

Faith and Fate: Limits of Transformers on Compositionality

Jun 01, 2023
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing

May 26, 2023
Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi

Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning

May 24, 2023
Ximing Lu, Faeze Brahman, Peter West, Jaehun Jung, Khyathi Chandu, Abhilasha Ravichander, Lianhui Qin, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian Fisher, Bill Yuchen Lin, Skyler Hallinan, Xiang Ren, Sean Welleck, Yejin Choi

Improving Language Models with Advantage-based Offline Policy Gradients

May 24, 2023
Ashutosh Baheti, Ximing Lu, Faeze Brahman, Ronan Le Bras, Maarten Sap, Mark Riedl

SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization

Dec 20, 2022
Hyunwoo Kim, Jack Hessel, Liwei Jiang, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin Choi

Reinforced Clarification Question Generation with Defeasibility Rewards for Disambiguating Social and Moral Situations

Dec 20, 2022
Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Ximing Lu, Liwei Jiang, Yejin Choi, Chandra Bhagavatula

I2D2: Inductive Knowledge Distillation with NeuroLogic and Self-Imitation

Dec 19, 2022
Chandra Bhagavatula, Jena D. Hwang, Doug Downey, Ronan Le Bras, Ximing Lu, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, Yejin Choi

Generating Sequences by Learning to Self-Correct

Oct 31, 2022
Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, Yejin Choi
