Rachel K. E. Bellamy

AI Explainability 360: Impact and Design

Sep 24, 2021
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Joint Optimization of AI Fairness and Utility: A Human-Centered Approach

Feb 05, 2020
Yunfeng Zhang, Rachel K. E. Bellamy, Kush R. Varshney

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

Jan 07, 2020
Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Bootstrapping Conversational Agents With Weak Supervision

Dec 14, 2018
Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

Oct 03, 2018
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang

Visualizations for an Explainable Planning Agent

Feb 08, 2018
Tathagata Chakraborti, Kshitij P. Fadnis, Kartik Talamadupula, Mishal Dholakia, Biplav Srivastava, Jeffrey O. Kephart, Rachel K. E. Bellamy
