
Kushal Kafle

OccamNets: Mitigating Dataset Bias by Favoring Simpler Hypotheses

Apr 11, 2022

Learning to Predict Visual Attributes in the Wild

Jun 17, 2021

An Investigation of Critical Issues in Bias Mitigation Techniques

Apr 01, 2021

On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law

May 19, 2020

Do We Need Fully Connected Output Layers in Convolutional Networks?

Apr 29, 2020

A negative case analysis of visual grounding methods for VQA

Apr 15, 2020

REMIND Your Neural Network to Prevent Catastrophic Forgetting

Oct 06, 2019

Answering Questions about Data Visualizations using Efficient Bimodal Fusion

Aug 05, 2019

Challenges and Prospects in Vision and Language Research

May 24, 2019

Answer Them All! Toward Universal Visual Question Answering Models

Apr 05, 2019