
William Agnew

Characterizing Delusional Spirals through Human-LLM Chat Logs

Mar 17, 2026

How Professional Visual Artists are Negotiating Generative AI in the Workplace

Mar 04, 2026

Slurry-as-a-Service: A Modest Proposal on Scalable Pluralistic Alignment for Nutrient Optimization

Mar 02, 2026

The Algorithmic Gaze: An Audit and Ethnography of the LAION-Aesthetics Predictor Model

Jan 14, 2026

How do data owners say no? A case study of data consent mechanisms in web-scraped vision-language AI training datasets

Nov 10, 2025

Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers

Apr 25, 2025

The Cake that is Intelligence and Who Gets to Bake it: An AI Analogy and its Implications for Participation

Feb 06, 2025

Data Defenses Against Large Language Models

Oct 17, 2024

Sound Check: Auditing Audio Datasets

Oct 17, 2024

'Simulacrum of Stories': Examining Large Language Models as Qualitative Research Participants

Sep 28, 2024