Scott A. Hale

Framing Migration: A Computational Analysis of UK Parliamentary Discourse

Sep 17, 2025

AI-Powered Detection of Inappropriate Language in Medical School Curricula

Aug 27, 2025

Why human-AI relationships need socioaffective alignment

Feb 04, 2025

HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter

Nov 23, 2024

LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages

Jun 11, 2024

SynDy: Synthetic Dynamic Dataset Generation Framework for Misinformation Tasks

May 17, 2024

Global News Synchrony and Diversity During the Start of the COVID-19 Pandemic

May 01, 2024

From Languages to Geographies: Towards Evaluating Cultural Bias in Hate Speech Datasets

Apr 27, 2024

The PRISM Alignment Project: What Participatory, Representative and Individualised Human Feedback Reveals About the Subjective and Multicultural Alignment of Large Language Models

Apr 24, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons

Apr 18, 2024