Yejin Choi


Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step

Jun 24, 2023
Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi


Commonsense Knowledge Transfer for Pre-trained Language Models

Jun 04, 2023
Wangchunshu Zhou, Ronan Le Bras, Yejin Choi


Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference

Jun 04, 2023
Wangchunshu Zhou, Ronan Le Bras, Yejin Choi


Faith and Fate: Limits of Transformers on Compositionality

Jun 01, 2023
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi


Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker

Jun 01, 2023
Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, Yulia Tsvetkov


PlaSma: Making Small Language Models Better Procedural Knowledge Models for (Counterfactual) Planning

May 31, 2023
Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Jena D. Hwang, Xiang Lorraine Li, Hirona J. Arai, Soumya Sanyal, Keisuke Sakaguchi, Xiang Ren, Yejin Choi


SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created Through Human-Machine Collaboration

May 28, 2023
Hwaran Lee, Seokhee Hong, Joonsuk Park, Takyoung Kim, Meeyoung Cha, Yejin Choi, Byoung Pil Kim, Gunhee Kim, Eun-Ju Lee, Yong Lim, Alice Oh, Sangchul Park, Jung-Woo Ha


SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks

May 27, 2023
Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, Xiang Ren


From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models

May 26, 2023
Julia Mendelsohn, Ronan Le Bras, Yejin Choi, Maarten Sap


Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing

May 26, 2023
Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi
