Stefano Teso

Perks and Pitfalls of Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs
Jun 21, 2024

A Benchmark Suite for Systematically Evaluating Reasoning Shortcuts
Jun 14, 2024

Semantic Loss Functions for Neuro-Symbolic Structured Prediction
May 12, 2024

Towards Logically Consistent Language Models via Probabilistic Reasoning
Apr 19, 2024

Learning To Guide Human Decision Makers With Vision-Language Models
Mar 28, 2024

BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts
Feb 19, 2024

Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning
Sep 14, 2023

How Faithful are Self-Explainable GNNs?
Aug 29, 2023

Learning to Guide Human Experts via Personalized Large Language Models
Aug 11, 2023

Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts
May 31, 2023