Ana Lucic

Clifford-Steerable Convolutional Neural Networks
Feb 22, 2024
Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré

Semi-Supervised Object Detection in the Open World
Jul 28, 2023
Garvita Allabadi, Ana Lucic, Peter Pao-Huang, Yu-Xiong Wang, Vikram Adve

Explaining Predictions from Machine Learning Models: Algorithms, Users, and Pedagogy
Sep 12, 2022
Ana Lucic

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users
Jul 06, 2022
Ana Lucic, Sheeraz Ahmad, Amanda Furtado Brinhosa, Vera Liao, Himani Agrawal, Umang Bhatt, Krishnaram Kenthapadi, Alice Xiang, Maarten de Rijke, Nicholas Drabowski

A Song of (Dis)agreement: Evaluating the Evaluation of Explainable Artificial Intelligence in Natural Language Processing
May 09, 2022
Michael Neely, Stefan F. Schouten, Maurits Bleeker, Ana Lucic

Teaching Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence through the Lens of Reproducibility
Nov 09, 2021
Ana Lucic, Maurits Bleeker, Sami Jullien, Samarth Bhargav, Maarten de Rijke

Order in the Court: Explainable AI Methods Prone to Disagreement
May 07, 2021
Michael Neely, Stefan F. Schouten, Maurits J. R. Bleeker, Ana Lucic

To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
Apr 14, 2021
Kim de Bie, Ana Lucic, Hinda Haned

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
Feb 05, 2021
Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke, Fabrizio Silvestri