Ian E. Nielsen

Targeted Background Removal Creates Interpretable Feature Visualizations

Jun 22, 2023
Ian E. Nielsen, Erik Grundeland, Joseph Snedeker, Ghulam Rasool, Ravi P. Ramachandran

EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models

Mar 15, 2023
Ian E. Nielsen, Ravi P. Ramachandran, Nidhal Bouaynaya, Hassan M. Fathallah-Shaykh, Ghulam Rasool

Transformers in Time-series Analysis: A Tutorial

Apr 28, 2022
Sabeen Ahmed, Ian E. Nielsen, Aakash Tripathi, Shamoon Siddiqui, Ghulam Rasool, Ravi P. Ramachandran

Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks

Jul 28, 2021
Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Nidhal Bouaynaya, Ravi P. Ramachandran
