Rahul Nair

Ranking Large Language Models without Ground Truth
Feb 21, 2024
Amit Dhurandhar, Rahul Nair, Moninder Singh, Elizabeth Daly, Karthikeyan Natesan Ramamurthy

Explaining Knock-on Effects of Bias Mitigation
Dec 01, 2023
Svetoslav Nizhnichenkov, Rahul Nair, Elizabeth Daly, Brian Mac Namee

Iterative Reward Shaping using Human Feedback for Correcting Reward Misspecification
Aug 30, 2023
Jasmina Gajcin, James McCarthy, Rahul Nair, Radu Marinescu, Elizabeth Daly, Ivana Dusparic

Co-creating a globally interpretable model with human input
Jun 23, 2023
Rahul Nair

Interpretable Differencing of Machine Learning Models
Jun 13, 2023
Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly

AutoDOViz: Human-Centered Automation for Decision Optimization
Feb 19, 2023
Daniel Karl I. Weidele, Shazia Afzal, Abel N. Valente, Cole Makuch, Owen Cornec, Long Vu, Dharmashankar Subramanian, Werner Geyer, Rahul Nair, Inge Vejsbjerg, Radu Marinescu, Paulito Palmes, Elizabeth M. Daly, Loraine Franke, Daniel Haehn

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach
Nov 02, 2022
Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh

Boolean Decision Rules for Reinforcement Learning Policy Summarisation
Jul 18, 2022
James McCarthy, Rahul Nair, Elizabeth Daly, Radu Marinescu, Ivana Dusparic

User Driven Model Adjustment via Boolean Rule Explanations
Mar 28, 2022
Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair

FROTE: Feedback Rule-Driven Oversampling for Editing Models
Jan 06, 2022
Öznur Alkan, Dennis Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M. Daly, Diptikalyan Saha