Ana Lucic

Aurora: A Foundation Model of the Atmosphere

May 20, 2024

Clifford-Steerable Convolutional Neural Networks

Feb 22, 2024

Semi-Supervised Object Detection in the Open World

Jul 28, 2023

Explaining Predictions from Machine Learning Models: Algorithms, Users, and Pedagogy

Sep 12, 2022

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

Jul 06, 2022

A Song of (Dis)agreement: Evaluating the Evaluation of Explainable Artificial Intelligence in Natural Language Processing

May 09, 2022

Teaching Fairness, Accountability, Confidentiality, and Transparency in Artificial Intelligence through the Lens of Reproducibility

Nov 09, 2021

Order in the Court: Explainable AI Methods Prone to Disagreement

May 07, 2021

To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions

Apr 14, 2021

CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

Feb 05, 2021