Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

Nov 15, 2020
Umang Bhatt, Yunfeng Zhang, Javier Antorán, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Adrian Weller, Alice Xiang

* 19 pages, 6 figures 

Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation

Sep 06, 2020
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

* Accepted at Workshop on Data Science with Human in the Loop (DaSH) @ ACM SIGKDD 2020 

Measuring Social Biases of Crowd Workers using Counterfactual Queries

Apr 04, 2020
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Klaus Mueller

* Accepted at the Workshop on Fair and Responsible AI at ACM CHI 2020 

Questioning the AI: Informing Design Practices for Explainable AI User Experiences

Feb 08, 2020
Q. Vera Liao, Daniel Gruen, Sarah Miller

* Working draft. To appear in the ACM CHI Conference on Human Factors in Computing Systems (CHI 2020) 

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience

Jan 31, 2020
Bhavya Ghai, Q. Vera Liao, Yunfeng Zhang, Rachel Bellamy, Klaus Mueller

* working draft 

Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making

Jan 07, 2020
Yunfeng Zhang, Q. Vera Liao, Rachel K. E. Bellamy

Enabling Value Sensitive AI Systems through Participatory Design Fictions

Dec 13, 2019
Q. Vera Liao, Michael Muller

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Tell Me About Yourself: Using an AI-Powered Chatbot to Conduct Conversational Surveys

May 25, 2019
Ziang Xiao, Michelle X. Zhou, Q. Vera Liao, Gloria Mark, Changyan Chi, Wenxi Chen, Huahai Yang

* Currently under review 

Bootstrapping Conversational Agents With Weak Supervision

Dec 14, 2018
Neil Mallinar, Abhishek Shah, Rajendra Ugrani, Ayush Gupta, Manikandan Gurusankar, Tin Kam Ho, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Robert Yates, Chris Desmarais, Blake McGregor

* 6 pages, 3 figures, 1 table, Accepted for publication in IAAI 2019 