
Rahul Nair

On Efficient and Statistical Quality Estimation for Data Annotation

May 20, 2024

Ranking Large Language Models without Ground Truth

Feb 21, 2024

Explaining Knock-on Effects of Bias Mitigation

Dec 01, 2023

Iterative Reward Shaping using Human Feedback for Correcting Reward Misspecification

Aug 30, 2023

Co-creating a globally interpretable model with human input

Jun 23, 2023

Interpretable Differencing of Machine Learning Models

Jun 13, 2023

AutoDOViz: Human-Centered Automation for Decision Optimization

Feb 19, 2023

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022

Boolean Decision Rules for Reinforcement Learning Policy Summarisation

Jul 18, 2022

User Driven Model Adjustment via Boolean Rule Explanations

Mar 28, 2022