
Emma Kallina

FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines

Jul 28, 2023
Matthew Barker, Emma Kallina, Dhananjay Ashok, Katherine M. Collins, Ashley Casovan, Adrian Weller, Ameet Talwalkar, Valerie Chen, Umang Bhatt

Even though machine learning (ML) pipelines affect an increasing array of stakeholders, there is little work on how input from stakeholders is recorded and incorporated. We propose FeedbackLogs, addenda to existing documentation of ML pipelines, to track the input of multiple stakeholders. Each log records important details about the feedback collection process, the feedback itself, and how the feedback is used to update the ML pipeline. In this paper, we introduce and formalise a process for collecting a FeedbackLog. We also provide concrete use cases where FeedbackLogs can be employed as evidence for algorithmic auditing and as a tool to record updates based on stakeholder feedback.
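
As a rough illustration (not the schema defined in the paper), a FeedbackLog entry could be represented as a small data structure that pairs each piece of feedback with how it was collected and how the pipeline was updated in response; the field names below are assumptions made for this sketch.

# A minimal sketch, assuming illustrative field names; this is not the schema
# defined in the paper.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class FeedbackRecord:
    stakeholder: str        # who gave the feedback (e.g. a clinician or end user)
    collection_method: str  # how the feedback was elicited (interview, survey, ...)
    collected_on: date      # when the feedback was gathered
    feedback: str           # the feedback itself, verbatim or summarised
    pipeline_update: str    # how the ML pipeline was (or was not) changed in response
    rationale: str = ""     # why that update was or was not made

@dataclass
class FeedbackLog:
    pipeline_name: str      # the ML pipeline this log is an addendum to
    records: List[FeedbackRecord] = field(default_factory=list)

    def add(self, record: FeedbackRecord) -> None:
        # Append a new record, e.g. after each round of stakeholder input.
        self.records.append(record)

A log structured along these lines could be kept alongside existing pipeline documentation and exported as evidence for algorithmic auditing, as the abstract describes.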

Learning Personalized Decision Support Policies

Apr 13, 2023
Umang Bhatt, Valerie Chen, Katherine M. Collins, Parameswaran Kamalaruban, Emma Kallina, Adrian Weller, Ameet Talwalkar

Individual human decision-makers may benefit from different forms of support to improve decision outcomes. However, a key question is which form of support will lead to accurate decisions at a low cost. In this work, we propose learning a decision support policy that, for a given input, chooses which form of support, if any, to provide. We consider decision-makers for whom we have no prior information and formalize learning their respective policies as a multi-objective optimization problem that trades off accuracy and cost. Using techniques from stochastic contextual bandits, we propose $\texttt{THREAD}$, an online algorithm to personalize a decision support policy for each decision-maker, and devise a hyper-parameter tuning strategy to identify a cost-performance trade-off using simulated human behavior. We provide computational experiments to demonstrate the benefits of $\texttt{THREAD}$ compared to offline baselines. We then introduce $\texttt{Modiste}$, an interactive tool that provides $\texttt{THREAD}$ with an interface. We conduct human subject experiments to show how $\texttt{Modiste}$ learns policies personalized to each decision-maker and discuss the nuances of learning decision support policies online for real users.
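
As a rough illustration of the idea (not the authors' implementation of THREAD), the sketch below shows a simple epsilon-greedy linear contextual bandit that picks a form of support per input by scoring estimated accuracy minus a cost penalty; the arm names, costs, and update rule are assumptions made for the example.

# Illustrative sketch only: an epsilon-greedy linear contextual bandit that
# chooses a form of decision support per input, trading off estimated accuracy
# against cost. Arm names, costs, and the update rule are assumptions for this
# example and do not reproduce THREAD.
import numpy as np

ARMS = ["no_support", "show_explanation", "defer_to_expert"]  # candidate forms of support
COSTS = np.array([0.0, 0.1, 0.5])  # assumed per-arm cost of providing each form
LAMBDA = 1.0                       # weight on cost in the accuracy-cost trade-off
EPSILON = 0.1                      # exploration rate

class SupportPolicy:
    def __init__(self, context_dim: int, n_arms: int = len(ARMS)):
        # One ridge-regression estimate of expected decision accuracy per arm.
        self.A = [np.eye(context_dim) for _ in range(n_arms)]
        self.b = [np.zeros(context_dim) for _ in range(n_arms)]
        self.n_arms = n_arms

    def choose(self, x: np.ndarray, rng: np.random.Generator) -> int:
        if rng.random() < EPSILON:
            return int(rng.integers(self.n_arms))  # explore a random form of support
        scores = [
            x @ np.linalg.solve(self.A[a], self.b[a]) - LAMBDA * COSTS[a]
            for a in range(self.n_arms)
        ]
        return int(np.argmax(scores))              # exploit: estimated accuracy minus cost

    def update(self, arm: int, x: np.ndarray, correct: bool) -> None:
        # Online ridge update using whether the decision-maker's decision was correct.
        self.A[arm] += np.outer(x, x)
        self.b[arm] += float(correct) * x

In the online setting the abstract describes, choose would be called once per input for a given decision-maker and update with the resulting decision outcome, so the support policy is personalised to that decision-maker over time.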

* Working paper 