Karthikeyan Natesan Ramamurthy

Understanding racial bias in health using the Medical Expenditure Panel Survey data

Nov 04, 2019
Moninder Singh, Karthikeyan Natesan Ramamurthy

Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning

Jun 05, 2019
Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović

PI-Net: A Deep Learning Approach to Extract Topological Persistence Images

Jun 05, 2019
Anirudh Som, Hongjun Choi, Karthikeyan Natesan Ramamurthy, Matthew Buman, Pavan Turaga

Optimized Score Transformation for Fair Classification

May 31, 2019
Dennis Wei, Karthikeyan Natesan Ramamurthy, Flavio du Pin Calmon

Counting and Segmenting Sorghum Heads

May 30, 2019
Min-hwan Oh, Peder Olsen, Karthikeyan Natesan Ramamurthy

Crowd Counting with Decomposed Uncertainty

Mar 15, 2019
Min-hwan Oh, Peder A. Olsen, Karthikeyan Natesan Ramamurthy

Bias Mitigation Post-processing for Individual and Group Fairness

Dec 14, 2018
Pranay K. Lohia, Karthikeyan Natesan Ramamurthy, Manish Bhide, Diptikalyan Saha, Kush R. Varshney, Ruchir Puri

TED: Teaching AI to Explain its Decisions

Nov 12, 2018
Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

Oct 03, 2018
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilović, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang

Teaching Meaningful Explanations

Sep 11, 2018
Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović
