
Valerie Chen

The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers

Apr 03, 2024

Do LLMs exhibit human-like response biases? A case study in survey design

Nov 07, 2023

AdvisingNets: Learning to Distinguish Correct and Wrong Classifications via Nearest-Neighbor Explanations

Aug 25, 2023

FeedbackLogs: Recording and Incorporating Stakeholder Feedback into Machine Learning Pipelines

Jul 28, 2023

Learning Personalized Decision Support Policies

Apr 13, 2023

Assisting Human Decisions in Document Matching

Feb 16, 2023

A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies

Feb 15, 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations

Jan 18, 2023

On the Importance of Application-Grounded Experimental Design for Evaluating Explainable ML Methods

Jun 30, 2022

Use-Case-Grounded Simulations for Explanation Evaluation

Jun 05, 2022