Hendrik Schuff

Explaining Pre-Trained Language Models with Attribution Scores: An Analysis in Low-Resource Settings

Mar 08, 2024
Wei Zhou, Heike Adel, Hendrik Schuff, Ngoc Thang Vu

How are Prompts Different in Terms of Sensitivity?

Nov 13, 2023
Sheng Lu, Hendrik Schuff, Iryna Gurevych

How (Not) to Use Sociodemographic Information for Subjective NLP Tasks

Sep 13, 2023
Tilman Beck, Hendrik Schuff, Anne Lauscher, Iryna Gurevych

Neighboring Words Affect Human Interpretation of Saliency Explanations

May 06, 2023
Alon Jacovi, Hendrik Schuff, Heike Adel, Ngoc Thang Vu, Yoav Goldberg

How (Not) To Evaluate Explanation Quality

Oct 13, 2022
Hendrik Schuff, Heike Adel, Peng Qi, Ngoc Thang Vu

Human Interpretation of Saliency-based Explanation Over Text

Jan 27, 2022
Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu

Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings

Oct 13, 2021
Hendrik Schuff, Hsiu-Yu Yang, Heike Adel, Ngoc Thang Vu

Thought Flow Nets: From Single Predictions to Trains of Model Thought

Jul 26, 2021
Hendrik Schuff, Heike Adel, Ngoc Thang Vu
