H. Andrew Schwartz

SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks

Feb 03, 2024

Comparing Human-Centered Language Modeling: Is it Better to Model Groups, Individual Traits, or Both?

Jan 23, 2024

Adaptive Language-based Mental Health Assessment with Item-Response Theory

Nov 11, 2023

Systematic Evaluation of GPT-3 for Zero-Shot Personality Estimation

Jun 01, 2023

Human-Centered Metrics for Dialog System Evaluation

May 24, 2023

Transfer and Active Learning for Dissonance Detection: Addressing the Rare-Class Challenge

May 05, 2023

Robust language-based mental health assessments in time and space through social media

Feb 25, 2023

Human Language Modeling

May 10, 2022

Understanding RoBERTa's Mood: The Role of Contextual-Embeddings as User-Representations for Depression Prediction

Dec 27, 2021

MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection

Sep 16, 2021