Elizabeth M. Daly

Language Models in Dialogue: Conversational Maxims for Human-AI Interactions

Mar 22, 2024
Erik Miehling, Manish Nagireddy, Prasanna Sattigeri, Elizabeth M. Daly, David Piorkowski, John T. Richards

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations

Mar 09, 2024
Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspoon, Marcel Zalmanovici

Interpretable Differencing of Machine Learning Models

Jun 13, 2023
Swagatam Haldar, Diptikalyan Saha, Dennis Wei, Rahul Nair, Elizabeth M. Daly

AutoDOViz: Human-Centered Automation for Decision Optimization

Feb 19, 2023
Daniel Karl I. Weidele, Shazia Afzal, Abel N. Valente, Cole Makuch, Owen Cornec, Long Vu, Dharmashankar Subramanian, Werner Geyer, Rahul Nair, Inge Vejsbjerg, Radu Marinescu, Paulito Palmes, Elizabeth M. Daly, Loraine Franke, Daniel Haehn

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022
Dennis Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh

User Driven Model Adjustment via Boolean Rule Explanations

Mar 28, 2022
Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair

FROTE: Feedback Rule-Driven Oversampling for Editing Models

Jan 06, 2022
Öznur Alkan, Dennis Wei, Massimiliano Mattetti, Rahul Nair, Elizabeth M. Daly, Diptikalyan Saha
