Zana Buçinca

Users Mispredict Their Own Preferences for AI Writing Assistance

Jan 08, 2026

Contrastive Explanations That Anticipate Human Misconceptions Can Improve Human Decision-Making Skills

Oct 05, 2024

Learning Interpretable Fair Representations

Jun 24, 2024

Towards Optimizing Human-Centric Objectives in AI-Assisted Decision-Making With Offline Reinforcement Learning

Mar 09, 2024

Adaptive interventions for both accuracy and time in AI-assisted human decision making

Jun 12, 2023

How Different Groups Prioritize Ethical Values for Responsible AI

May 16, 2022

To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making

Feb 19, 2021

Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems

Jan 22, 2020