Vikram V. Ramaswamy

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs

Mar 27, 2023
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

Beyond web-scraping: Crowd-sourcing a geographically diverse image dataset

Jan 05, 2023
Vikram V. Ramaswamy, Sing Yu Lin, Dora Zhao, Aaron B. Adcock, Laurens van der Maaten, Deepti Ghadiyaram, Olga Russakovsky

Overlooked factors in concept-based explanations: Dataset choice, concept salience, and human capability

Jul 20, 2022
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Ruth Fong, Olga Russakovsky

Gender Artifacts in Visual Datasets

Jun 18, 2022
Nicole Meister, Dora Zhao, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

ELUDE: Generating interpretable explanations via a decomposition into labelled and unlabelled features

Jun 16, 2022
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Nicole Meister, Ruth Fong, Olga Russakovsky

Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation

May 10, 2022
Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky

HIVE: Evaluating the Human Interpretability of Visual Explanations

Jan 10, 2022
Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky

Fair Attribute Classification through Latent Space De-biasing

Dec 04, 2020
Vikram V. Ramaswamy, Sunnie S. Y. Kim, Olga Russakovsky
