
Rahul Nair

Explaining Knock-on Effects of Bias Mitigation

Dec 01, 2023
Svetoslav Nizhnichenkov, Rahul Nair, Elizabeth Daly, Brian Mac Namee

Iterative Reward Shaping using Human Feedback for Correcting Reward Misspecification

Aug 30, 2023
Jasmina Gajcin, James McCarthy, Rahul Nair, Radu Marinescu, Elizabeth Daly, Ivana Dusparic

Co-creating a globally interpretable model with human input

Jun 23, 2023
Rahul Nair

Interpretable Differencing of Machine Learning Models

Jun 13, 2023
Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly

AutoDOViz: Human-Centered Automation for Decision Optimization

Feb 19, 2023
Daniel Karl I. Weidele, Shazia Afzal, Abel N. Valente, Cole Makuch, Owen Cornec, Long Vu, Dharmashankar Subramanian, Werner Geyer, Rahul Nair, Inge Vejsbjerg, Radu Marinescu, Paulito Palmes, Elizabeth M. Daly, Loraine Franke, Daniel Haehn

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022
Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh

Boolean Decision Rules for Reinforcement Learning Policy Summarisation

Jul 18, 2022
James McCarthy, Rahul Nair, Elizabeth Daly, Radu Marinescu, Ivana Dusparic

User Driven Model Adjustment via Boolean Rule Explanations

Mar 28, 2022
Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair

FROTE: Feedback Rule-Driven Oversampling for Editing Models

Jan 06, 2022
Öznur Alkan, Dennis Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M. Daly, Diptikalyan Saha

Contrastive Explanations for Comparing Preferences of Reinforcement Learning Agents

Dec 17, 2021
Jasmina Gajcin, Rahul Nair, Tejaswini Pedapati, Radu Marinescu, Elizabeth Daly, Ivana Dusparic
