Mennatallah El-Assady

generAItor: Tree-in-the-Loop Text Generation for Language Model Explainability and Adaptation

Mar 12, 2024
Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady

SyntaxShap: Syntax-aware Explainability Method for Text Generation

Feb 14, 2024
Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady

RELIC: Investigating Large Language Model Responses using Self-Consistency

Nov 28, 2023
Furui Cheng, Vilém Zouhar, Simran Arora, Mrinmaya Sachan, Hendrik Strobelt, Mennatallah El-Assady

A Diachronic Perspective on User Trust in AI under Uncertainty

Oct 20, 2023
Shehzaad Dhuliawala, Vilém Zouhar, Mennatallah El-Assady, Mrinmaya Sachan

Revealing the Unwritten: Visual Investigation of Beam Search Trees to Address Language Model Prompting Challenges

Oct 17, 2023
Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Andreas Spitz, Mennatallah El-Assady

GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations

Sep 28, 2023
Kenza Amara, Mennatallah El-Assady, Rex Ying

RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback

Aug 08, 2023
Yannick Metz, David Lindner, Raphaël Baur, Daniel Keim, Mennatallah El-Assady

Visual Explanations with Attributions and Counterfactuals on Time Series Classification

Jul 14, 2023
Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady

Which Spurious Correlations Impact Reasoning in NLI Models? A Visual Interactive Diagnosis through Data-Constrained Counterfactuals

Jun 21, 2023
Robin Chan, Afra Amini, Mennatallah El-Assady

Improving Explainability of Disentangled Representations using Multipath-Attribution Mappings

Jun 15, 2023
Lukas Klein, João B. S. Carvalho, Mennatallah El-Assady, Paolo Penna, Joachim M. Buhmann, Paul F. Jaeger
