Q. Vera Liao

Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation

Sep 06, 2020
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

Measuring Social Biases of Crowd Workers using Counterfactual Queries

Apr 04, 2020
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

Questioning the AI: Informing Design Practices for Explainable AI User Experiences

Feb 08, 2020
Q. Vera Liao, Daniel Gruen, Sarah Miller

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

Jan 31, 2020
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

Jan 07, 2020
Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy

Enabling Value Sensitive AI Systems through Participatory Design Fictions

Dec 13, 2019
Q. Vera Liao, Michael Muller

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
