Yunfeng Zhang

Joint Optimization of AI Fairness and Utility: A Human-Centered Approach

Feb 05, 2020
Yunfeng Zhang, Rachel K. E. Bellamy, Kush R. Varshney

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

Jan 31, 2020
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller

Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness

Jan 13, 2020
Michael Hind, Dennis Wei, Yunfeng Zhang

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

Jan 07, 2020
Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Bootstrapping Conversational Agents With Weak Supervision

Dec 14, 2018
Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor

Joint association and classification analysis of multi-view data

Nov 20, 2018
Yunfeng Zhang, Irina Gaynanova

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

Oct 03, 2018
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilović, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang
