Matthew Matero

SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks

Feb 03, 2024
Gourab Dey, Adithya V Ganesan, Yash Kumar Lal, Manal Shah, Shreyashee Sinha, Matthew Matero, Salvatore Giorgi, Vivek Kulkarni, H. Andrew Schwartz

Human Language Modeling

May 10, 2022
Nikita Soni, Matthew Matero, Niranjan Balasubramanian, H. Andrew Schwartz

Understanding RoBERTa's Mood: The Role of Contextual-Embeddings as User-Representations for Depression Prediction

Dec 27, 2021
Matthew Matero, Albert Hung, H. Andrew Schwartz

MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection

Sep 16, 2021
Matthew Matero, Nikita Soni, Niranjan Balasubramanian, H. Andrew Schwartz

Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality

May 07, 2021
Adithya V Ganesan, Matthew Matero, Aravind Reddy Ravula, Huy Vu, H. Andrew Schwartz
