Michael Muller

Human-Centered Responsible Artificial Intelligence: Current & Future Trends

Feb 16, 2023
Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Sean Kennedy, Michael Muller, Simone Stumpf, Q. Vera Liao, Ricardo Baeza-Yates, Lora Aroyo, Jess Holbrook, Ewa Luger, Michael Madaio, Ilana Golbin Blumenfeld, Maria De-Arteaga, Jessica Vitak, Alexandra Olteanu

In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence. While different research communities may use different terminology to discuss similar topics, all of this work is ultimately aimed at developing AI that benefits humanity, is grounded in human rights and ethics, and reduces the potential harms of AI. In this special interest group, we aim to bring together researchers from academia and industry who are interested in these topics, map current and future research trends, and advance this area of research by fostering collaboration and sharing ideas.

* To appear in Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems 

A Case Study in Engineering a Conversational Programming Assistant's Persona

Jan 13, 2023
Steven I. Ross, Michael Muller, Fernando Martinez, Stephanie Houde, Justin D. Weisz

The Programmer's Assistant is an experimental prototype software development environment that integrates a chatbot with a code editor. Conversational capability was achieved by using an existing code-fluent Large Language Model and providing it with a prompt that establishes a conversational interaction pattern, a set of conventions, and a style of interaction appropriate for the application. A discussion of the evolution of the prompt provides a case study in how to coax an existing foundation model to behave in a desirable manner for a particular application.
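
The abstract describes how a standing prompt establishes a persona, a set of conventions, and a style of interaction for the assistant. Below is a minimal sketch of that pattern, assuming an OpenAI-style chat API as a stand-in for the code-fluent model the authors used; the client, model name, and prompt wording here are hypothetical illustrations, not the authors' actual setup.

```python
# Illustrative only: the paper's actual prompt and model are not reproduced here.
# Assumes the OpenAI Python client as a stand-in for any chat-capable code LLM.
from openai import OpenAI

client = OpenAI()

# A standing system prompt that establishes the assistant's persona,
# conversational conventions, and style of interaction for a code editor.
PERSONA_PROMPT = """You are the Programmer's Assistant, a concise and polite \
pair-programming partner embedded in a code editor.
Conventions:
- Answer questions about the user's code; propose code only when asked.
- Return code in fenced blocks so the editor can offer to insert it.
- If a request is ambiguous, ask one brief clarifying question first.
- Admit uncertainty rather than inventing APIs."""

def ask_assistant(history: list[dict], user_message: str) -> str:
    """Send the conversation so far plus the new message and return the reply."""
    messages = [{"role": "system", "content": PERSONA_PROMPT}]
    messages += history + [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content
```

In this pattern the persona lives entirely in the prompt, so it can be iterated on, as the paper's case study describes, without retraining or fine-tuning the underlying model.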

* 11 pages. Submitted to the 4th Workshop on Human-AI Co-Creation with Generative Models (HAI-GEN) at IUI 2023 

Toward General Design Principles for Generative AI Applications

Jan 13, 2023
Justin D. Weisz, Michael Muller, Jessica He, Stephanie Houde

Generative AI technologies are growing in power, utility, and use. As generative technologies are incorporated into mainstream applications, there is a need for guidance on how to design those applications to foster productive and safe use. Based on recent research on human-AI co-creation within the HCI and AI communities, we present a set of seven principles for the design of generative AI applications. These principles are grounded in an environment of generative variability. Six of the principles focus on designing for characteristics of generative AI: multiple outcomes & imperfection; exploration & control; and mental models & explanations. In addition, we urge designers to design against the potential harms that may be caused by a generative model's hazardous output, misuse, or potential for human displacement. We anticipate that these principles will usefully inform design decisions made in the creation of novel human-AI applications, and we invite the community to apply, revise, and extend them in their own work.

* 16 pages, 1 figure. Submitted to the 4th Workshop on Human-AI Co-Creation with Generative Models (HAI-GEN) at IUI 2023 

Investigating Explainability of Generative AI for Code through Scenario-based Design

Feb 10, 2022
Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz

What does it mean for a generative AI model to be explainable? The emergent discipline of explainable AI (XAI) has made great strides in helping people understand discriminative models. Less attention has been paid to generative models that produce artifacts, rather than decisions, as output. Meanwhile, generative AI (GenAI) technologies are maturing and being applied in domains such as software engineering. Using scenario-based design and question-driven XAI design approaches, we explore users' explainability needs for GenAI in three software engineering use cases: natural language to code, code translation, and code auto-completion. We conducted 9 workshops with 43 software engineers in which real examples from state-of-the-art generative AI models were used to elicit users' explainability needs. Drawing from prior work, we also proposed 4 types of XAI features for GenAI for code and gathered additional design ideas from participants. Our work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.

Using Document Similarity Methods to create Parallel Datasets for Code Translation

Oct 11, 2021
Mayank Agarwal, Kartik Talamadupula, Fernando Martinez, Stephanie Houde, Michael Muller, John Richards, Steven I Ross, Justin D. Weisz

Translating source code from one programming language to another is a critical, time-consuming task in modernizing legacy applications and codebases. Recent work in this space has drawn inspiration from the software naturalness hypothesis by applying natural language processing techniques towards automating the code translation task. However, due to the paucity of parallel data in this domain, supervised techniques have only been applied to a limited set of popular programming languages. To bypass this limitation, unsupervised neural machine translation techniques have been proposed to learn code translation using only monolingual corpora. In this work, we propose to use document similarity methods to create noisy parallel datasets of code, thus enabling supervised techniques to be applied for automated code translation without having to rely on the availability or expensive curation of parallel code datasets. We explore the noise tolerance of models trained on such automatically-created datasets and show that these models perform comparably to models trained on ground truth for reasonable levels of noise. Finally, we exhibit the practical utility of the proposed method by creating parallel datasets for languages beyond the ones explored in prior work, thus expanding the set of programming languages for automated code translation.
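
As a rough illustration of the pipeline described above, the sketch below pairs files across two languages by document similarity to form a noisy parallel corpus. It assumes TF-IDF cosine similarity via scikit-learn as the similarity measure and Java-to-Python as an example language pair; the function name, tokenizer, and threshold are illustrative choices, not the authors' exact configuration.

```python
# A minimal sketch of the general idea, not the authors' pipeline:
# pair files across two languages by document similarity to build
# a noisy parallel corpus for supervised code translation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_noisy_parallel_pairs(java_files: list[str], python_files: list[str],
                               threshold: float = 0.5) -> list[tuple[int, int, float]]:
    """Return (java_idx, python_idx, score) for pairs above the similarity threshold."""
    # Crude identifier-level tokenization; real code similarity would need more care.
    vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_]\w+")
    matrix = vectorizer.fit_transform(java_files + python_files)
    java_vecs = matrix[: len(java_files)]
    py_vecs = matrix[len(java_files):]
    scores = cosine_similarity(java_vecs, py_vecs)  # shape: (n_java, n_python)

    pairs = []
    for i, row in enumerate(scores):
        j = int(row.argmax())              # best-matching Python file for this Java file
        if row[j] >= threshold:            # keep only confident (but still noisy) matches
            pairs.append((i, j, float(row[j])))
    return pairs
```

Pairs that clear the threshold become noisy (source, target) training examples for a supervised translation model; the paper's noise-tolerance experiments examine how models trained on such automatically created pairs compare with models trained on ground-truth parallel data.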

The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations

Jul 28, 2021
Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I-Hsiang Lee, Michael Muller, Mark O. Riedl

Explainability of AI systems is critical for users to take informed actions and hold systems accountable. While "opening the opaque box" is important, understanding who opens the box can govern whether the Human-AI interaction is effective. In this paper, we conduct a mixed-methods study of how two different groups of whos--people with and without a background in AI--perceive different types of AI explanations. These groups were chosen to examine how disparities in AI backgrounds can exacerbate the creator-consumer gap. Quantitatively, we report perceptions along five dimensions: confidence, intelligence, understandability, second chance, and friendliness. Qualitatively, we highlight how each group's AI background influences its interpretations and elucidate why the differences might exist through the lenses of appropriation and cognitive heuristics. We find that (1) both groups had unwarranted faith in numbers, to different extents and for different reasons, (2) each group found explanatory value in different explanations that went beyond the usage we designed them for, and (3) each group had different requirements for what counts as humanlike explanations. Using our findings, we discuss potential negative consequences, such as harmful manipulation of user trust, and propose design interventions to mitigate them. By bringing conscious awareness to how and why AI backgrounds shape the perceptions of potential creators and consumers in XAI, our work takes a formative step in advancing a pluralistic Human-Centered Explainable AI discourse.

How AI Developers Overcome Communication Challenges in a Multidisciplinary Team: A Case Study

Jan 13, 2021
David Piorkowski, Soya Park, April Yi Wang, Dakuo Wang, Michael Muller, Felix Portnoy

The development of AI applications is a multidisciplinary effort, involving multiple roles collaborating with the AI developers, an umbrella term we use to include data scientists and other AI-adjacent roles on the same team. During these collaborations, there is a knowledge mismatch between AI developers, who are skilled in data science, and external stakeholders who are typically not. This difference leads to communication gaps, and the onus falls on AI developers to explain data science concepts to their collaborators. In this paper, we report on a study including analyses of both interviews with AI developers and artifacts they produced for communication. Using the analytic lens of shared mental models, we report on the types of communication gaps that AI developers face, how AI developers communicate across disciplinary and organizational boundaries, and how they simultaneously manage issues regarding trust and expectations.

* 25 pages, 7 figures, 4 tables 

Expanding Explainability: Towards Social Transparency in AI systems

Jan 12, 2021
Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, Justin D. Weisz

As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST's effect and implications at the technical, decision-making, and organizational level. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.

* Accepted to CHI2021 

How Much Automation Does a Data Scientist Want?

Jan 07, 2021
Dakuo Wang, Q. Vera Liao, Yunfeng Zhang, Udayan Khurana, Horst Samulowitz, Soya Park, Michael Muller, Lisa Amini

Data science and machine learning (DS/ML) are at the heart of the recent advancements of many Artificial Intelligence (AI) applications. There is an active research thread in AI, AutoML, that aims to develop systems for automating the end-to-end DS/ML lifecycle. However, do DS and ML workers really want to automate their DS/ML workflow? To answer this question, we first synthesize a human-centered AutoML framework with 6 User Roles/Personas, 10 Stages and 43 Sub-Tasks, 5 Levels of Automation, and 5 Types of Explanation, through a review of research literature and marketing reports. Second, we use the framework to guide the design of an online survey study with 217 DS/ML workers who had varying degrees of experience and whose user roles "matched" our 6 roles/personas. We found that different user personas participated in distinct stages of the lifecycle -- but not all stages. Their desired levels of automation and types of explanation for AutoML also varied significantly depending on the DS/ML stage and the user persona. Based on the survey results, we argue that user needs provide no rationale for complete automation of the end-to-end DS/ML lifecycle, and we propose next steps for user-controlled DS/ML automation.

How do Data Science Workers Collaborate? Roles, Workflows, and Tools

Jan 26, 2020
Amy X. Zhang, Michael Muller, Dakuo Wang

Today, the prominence of data science within organizations has given rise to teams of data science workers collaborating on extracting insights from data, as opposed to individual data scientists working alone. However, we still lack a deep understanding of how data science workers collaborate in practice. In this work, we conducted an online survey with 183 participants who work in various aspects of data science. We focused on their reported interactions with each other (e.g., managers with engineers) and with different tools (e.g., Jupyter Notebook). We found that data science teams are extremely collaborative and work with a variety of stakeholders and tools during the six common steps of a data science workflow (e.g., clean data and train model). We also found that the collaborative practices workers employ, such as documentation, vary according to the kinds of tools they use. Based on these findings, we discuss design implications for supporting data science team collaborations and future research directions.

* CSCW'2020 