
Olga Russakovsky

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Oct 02, 2022

SiRi: A Simple Selective Retraining Mechanism for Transformer-based Visual Grounding

Jul 27, 2022

Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability

Jul 20, 2022

Gender Artifacts in Visual Datasets

Jun 18, 2022

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features

Jun 16, 2022

Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks

Jun 06, 2022

Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation

May 10, 2022

CARETS: A Consistency And Robustness Evaluative Test Suite for VQA

Mar 15, 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations

Jan 10, 2022

Multi-query Video Retrieval

Jan 10, 2022