
Vivek Miglani

Using Captum to Explain Generative Language Models

Dec 09, 2023

Bias Mitigation Framework for Intersectional Subgroups in Neural Networks

Dec 26, 2022

Investigating sanity checks for saliency maps with image and text classification

Jun 08, 2021

Investigating Saturation Effects in Integrated Gradients

Oct 23, 2020

Mind the Pad -- CNNs can Develop Blind Spots

Oct 05, 2020

Captum: A unified and generic model interpretability library for PyTorch

Sep 16, 2020