
Libby Hemphill

Evaluating how LLM annotations represent diverse views on contentious topics

Mar 29, 2025

Characterizing Online Toxicity During the 2022 Mpox Outbreak: A Computational Analysis of Topical and Network Dynamics

Aug 21, 2024

Prompt Design Matters for Computational Social Science Tasks but in Unpredictable Ways

Jun 17, 2024

War and Peace: Large Language Model-based Multi-Agent Simulation of World Wars

Nov 28, 2023

How We Define Harm Impacts Data Annotations: Explaining How Annotators Distinguish Hateful, Offensive, and Toxic Comments

Sep 12, 2023

Investigating disaster response through social media data and the Susceptible-Infected-Recovered (SIR) model: A case study of 2020 Western U.S. wildfire season

Aug 10, 2023

DataChat: Prototyping a Conversational Agent for Dataset Search and Visualization

May 26, 2023

"HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media

Apr 20, 2023

A Bibliometric Review of Large Language Models Research from 2017 to 2023

Apr 03, 2023

A Natural Language Processing Pipeline for Detecting Informal Data References in Academic Literature

May 23, 2022