Michael Hind

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations

Mar 09, 2024
Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, Bishwaranjan Bhattacharjee, Djallel Bouneffouf, Subhajit Chaudhury, Pin-Yu Chen, Lamogha Chiazor, Elizabeth M. Daly, Rogério Abreu de Paula, Pierre Dognin, Eitan Farchi, Soumya Ghosh, Michael Hind, Raya Horesh, George Kour, Ja Young Lee, Erik Miehling, Keerthiram Murugesan, Manish Nagireddy, Inkit Padhi, David Piorkowski, Ambrish Rawat, Orna Raz, Prasanna Sattigeri, Hendrik Strobelt, Sarathkrishna Swaminathan, Christoph Tillmann, Aashka Trivedi, Kush R. Varshney, Dennis Wei, Shalisha Witherspoon, Marcel Zalmanovici

Quantitative AI Risk Assessments: Opportunities and Challenges

Sep 13, 2022
David Piorkowski, Michael Hind, John Richards

Evaluating a Methodology for Increasing AI Transparency: A Case Study

Jan 24, 2022
David Piorkowski, John Richards, Michael Hind

AI Explainability 360: Impact and Design

Sep 24, 2021
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

A Methodology for Creating AI FactSheets

Jun 28, 2020
John Richards, David Piorkowski, Michael Hind, Stephanie Houde, Aleksandra Mojsilović

Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness

Jan 13, 2020
Michael Hind, Dennis Wei, Yunfeng Zhang

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning

Jun 05, 2019
Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra Mojsilović
