Jennifer Wortman Vaughan

Open Datasheets: Machine-readable Documentation for Open Datasets and Responsible AI Assessments

Dec 11, 2023
Anthony Cintron Roman, Jennifer Wortman Vaughan, Valerie See, Steph Ballard, Nicolas Schifano, Jehu Torres, Caleb Robinson, Juan M. Lavista Ferres

Has the Machine Learning Review Process Become More Arbitrary as the Field Has Grown? The NeurIPS 2021 Consistency Experiment

Jun 05, 2023
Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan

AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap

Jun 02, 2023
Q. Vera Liao, Jennifer Wortman Vaughan

GAM Coach: Towards Interactive and User-centered Algorithmic Recourse

Mar 01, 2023
Zijie J. Wang, Jennifer Wortman Vaughan, Rich Caruana, Duen Horng Chau

Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience

Feb 21, 2023
Q. Vera Liao, Hariharan Subramonyam, Jennifer Wang, Jennifer Wortman Vaughan

Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions

Feb 14, 2023
Helena Vasconcelos, Gagan Bansal, Adam Fourney, Q. Vera Liao, Jennifer Wortman Vaughan

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

Jan 18, 2023
Valerie Chen, Q. Vera Liao, Jennifer Wortman Vaughan, Gagan Bansal

How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions?

Nov 22, 2022
Charvi Rastogi, Ivan Stelmakh, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, Zhenyu Xue, Hal Daumé III, Emma Pierson, Nihar B. Shah

Interpretable Distribution Shift Detection using Optimal Transport

Aug 04, 2022
Neha Hulkund, Nicolo Fusi, Jennifer Wortman Vaughan, David Alvarez-Melis

Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

Jun 30, 2022
Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana