Michael Hind

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations

Mar 09, 2024

Quantitative AI Risk Assessments: Opportunities and Challenges

Sep 13, 2022

Evaluating a Methodology for Increasing AI Transparency: A Case Study

Jan 24, 2022

AI Explainability 360: Impact and Design

Sep 24, 2021

A Methodology for Creating AI FactSheets

Jun 28, 2020

Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness

Jan 13, 2020

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019

Teaching AI to Explain its Decisions Using Embeddings and Multi-Task Learning

Jun 05, 2019

TED: Teaching AI to Explain its Decisions

Nov 12, 2018

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

Oct 03, 2018