Ronan Le Bras

NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation

Dec 10, 2023
Peter West, Ronan Le Bras, Taylor Sorensen, Bill Yuchen Lin, Liwei Jiang, Ximing Lu, Khyathi Chandu, Jack Hessel, Ashutosh Baheti, Chandra Bhagavatula, Yejin Choi

MacGyver: Are Large Language Models Creative Problem Solvers?

Nov 16, 2023
Yufei Tian, Abhilasha Ravichander, Lianhui Qin, Ronan Le Bras, Raja Marjieh, Nanyun Peng, Yejin Choi, Thomas L. Griffiths, Faeze Brahman

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

Oct 31, 2023
Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap

Commonsense Knowledge Transfer for Pre-trained Language Models

Jun 04, 2023
Wangchunshu Zhou, Ronan Le Bras, Yejin Choi

Modular Transformers: Compressing Transformers into Modularized Layers for Flexible Efficient Inference

Jun 04, 2023
Wangchunshu Zhou, Ronan Le Bras, Yejin Choi

NLPositionality: Characterizing Design Biases of Datasets and Models

Jun 02, 2023
Sebastin Santy, Jenny T. Liang, Ronan Le Bras, Katharina Reinecke, Maarten Sap

Faith and Fate: Limits of Transformers on Compositionality

Jun 01, 2023
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models

May 26, 2023
Julia Mendelsohn, Ronan Le Bras, Yejin Choi, Maarten Sap

Improving Language Models with Advantage-based Offline Policy Gradients

May 24, 2023
Ashutosh Baheti, Ximing Lu, Faeze Brahman, Ronan Le Bras, Maarten Sap, Mark Riedl
