
Bilal Alsallakh

Bias Mitigation Framework for Intersectional Subgroups in Neural Networks

Dec 26, 2022

Prescriptive and Descriptive Approaches to Machine-Learning Transparency

Apr 27, 2022

A Tour of Visualization Techniques for Computer Vision Datasets

Apr 19, 2022

Investigating sanity checks for saliency maps with image and text classification

Jun 08, 2021

Investigating Saturation Effects in Integrated Gradients

Oct 23, 2020

Mind the Pad -- CNNs can Develop Blind Spots

Oct 05, 2020

Captum: A unified and generic model interpretability library for PyTorch

Sep 16, 2020

Visualizing Classification Structure in Deep Neural Networks

Jul 12, 2020

Prediction Scores as a Window into Classifier Behavior

Nov 18, 2017

Do Convolutional Neural Networks Learn Class Hierarchy?

Oct 17, 2017