Melanie Sclar

The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning

Dec 04, 2023
Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar, Khyathi Chandu, Chandra Bhagavatula, Yejin Choi

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

Oct 31, 2023
Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap

Quantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting

Oct 17, 2023
Melanie Sclar, Yejin Choi, Yulia Tsvetkov, Alane Suhr

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

Oct 12, 2023
Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

Faith and Fate: Limits of Transformers on Compositionality

Jun 01, 2023
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, Yejin Choi

Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker

Jun 01, 2023
Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, Yulia Tsvetkov

Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation

Oct 25, 2022
Melanie Sclar, Peter West, Sachin Kumar, Yulia Tsvetkov, Yejin Choi
