George Chrysostomou

Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization

Nov 15, 2023
George Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras

On the Impact of Temporal Concept Drift on Model Explanations

Oct 17, 2022
Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, Nikolaos Aletras

An Empirical Study on Explanations in Out-of-Domain Settings

Feb 28, 2022
George Chrysostomou, Nikolaos Aletras

Frustratingly Simple Pretraining Alternatives to Masked Language Modeling

Sep 04, 2021
Atsuki Yamaguchi, George Chrysostomou, Katerina Margatina, Nikolaos Aletras

Enjoy the Salience: Towards Better Transformer-based Faithful Explanations with Word Salience

Aug 31, 2021
George Chrysostomou, Nikolaos Aletras

Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification

May 07, 2021
George Chrysostomou, Nikolaos Aletras

Variable Instance-Level Explainability for Text Classification

Apr 16, 2021
George Chrysostomou, Nikolaos Aletras
