
Q. Vera Liao

Expanding Explainability: Towards Social Transparency in AI systems

Jan 12, 2021

How Much Automation Does a Data Scientist Want?

Jan 07, 2021

Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

Nov 15, 2020

Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation

Sep 06, 2020

Measuring Social Biases of Crowd Workers using Counterfactual Queries

Apr 04, 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences

Feb 08, 2020

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

Jan 31, 2020

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

Jan 07, 2020

Enabling Value Sensitive AI Systems through Participatory Design Fictions

Dec 13, 2019

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019