Melanie Subbiah

Reading Subtext: Evaluating Large Language Models on Short Story Summarization with Writers

Mar 02, 2024
Melanie Subbiah, Sean Zhang, Lydia B. Chilton, Kathleen McKeown

Check-COVID: Fact-Checking COVID-19 News Claims with Scientific Evidence

May 29, 2023
Gengyu Wang, Kate Harwood, Lawrence Chillrud, Amith Ananthram, Melanie Subbiah, Kathleen McKeown

Unsupervised Selective Rationalization with Noise Injection

May 27, 2023
Adam Storek, Melanie Subbiah, Kathleen McKeown

Detecting Harmful Agendas in News Articles

Jan 31, 2023
Melanie Subbiah, Amrita Bhattacharjee, Bobby Yilun Hua, Tharindu Kumarage, Huan Liu, Kathleen McKeown

SafeText: A Benchmark for Exploring Physical Safety in Language Models

Oct 18, 2022
Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown, William Yang Wang

Mitigating Covertly Unsafe Text within Natural Language Systems

Oct 17, 2022
Alex Mei, Anisha Kabir, Sharon Levy, Melanie Subbiah, Emily Allaway, John Judge, Desmond Patton, Bruce Bimber, Kathleen McKeown, William Yang Wang

Language Models are Few-Shot Learners

Jun 05, 2020
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
