Yunfeng Zhang

Joint Optimization of AI Fairness and Utility: A Human-Centered Approach

Feb 05, 2020

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

Jan 31, 2020

Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness

Jan 13, 2020

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

Jan 07, 2020

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019

Bootstrapping Conversational Agents With Weak Supervision

Dec 14, 2018

Joint association and classification analysis of multi-view data

Nov 20, 2018

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

Oct 03, 2018

Danger-aware Weighted Advantage Composition of Deep Reinforcement Learning for Robot Navigation

Sep 11, 2018