David Jurgens

The Role of Network and Identity in the Diffusion of Hashtags

Jul 17, 2024

ValueScope: Unveiling Implicit Norms and Values via Return Potential Model of Social Interactions

Jul 02, 2024

Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions

Jun 17, 2024

A Multilingual Similarity Dataset for News Article Frame

May 22, 2024

The Call for Socially Aware Language Technologies

May 03, 2024

Modeling Empathetic Alignment in Conversation

May 02, 2024

Global News Synchrony and Diversity During the Start of the COVID-19 Pandemic

May 01, 2024

When it Rains, it Pours: Modeling Media Storms and the News Ecosystem

Dec 04, 2023

Is "A Helpful Assistant" the Best Role for Large Language Models? A Systematic Evaluation of Social Roles in System Prompts

Nov 16, 2023

Aligning with Whom? Large Language Models Have Gender and Racial Biases in Subjective NLP Tasks

Nov 16, 2023