Dennis Wei

Who Should Predict? Exact Algorithms For Learning to Defer to Humans

Jan 15, 2023
Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, David Sontag


On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022
Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh


Downstream Fairness Caveats with Synthetic Healthcare Data

Mar 09, 2022
Karan Bhanot, Ioana Baldini, Dennis Wei, Jiaming Zeng, Kristin P. Bennett


Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners

Feb 02, 2022
Karthikeyan Natesan Ramamurthy, Amit Dhurandhar, Dennis Wei, Zaid Bin Tariq


FROTE: Feedback Rule-Driven Oversampling for Editing Models

Jan 06, 2022
Öznur Alkan, Dennis Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M. Daly, Diptikalyan Saha


Ground-Truth, Whose Truth? -- Examining the Challenges with Annotating Toxic Text Datasets

Dec 07, 2021
Kofi Arhin, Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Moninder Singh


Interpretable and Fair Boolean Rule Sets via Column Generation

Nov 16, 2021
Connor Lawless, Sanjeeb Dash, Oktay Gunluk, Dennis Wei


AI Explainability 360: Impact and Design

Sep 24, 2021
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang


Your fairness may vary: Group fairness of pretrained language models in toxic text classification

Aug 03, 2021
Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Mikhail Yurochkin, Moninder Singh


Treatment Effect Estimation using Invariant Risk Minimization

Mar 13, 2021
Abhin Shah, Kartik Ahuja, Karthikeyan Shanmugam, Dennis Wei, Kush Varshney, Amit Dhurandhar
