John Richards

Quantitative AI Risk Assessments: Opportunities and Challenges

Sep 13, 2022
David Piorkowski, Michael Hind, John Richards

Evaluating a Methodology for Increasing AI Transparency: A Case Study

Jan 24, 2022
David Piorkowski, John Richards, Michael Hind

Using Document Similarity Methods to create Parallel Datasets for Code Translation

Oct 11, 2021
Mayank Agarwal, Kartik Talamadupula, Fernando Martinez, Stephanie Houde, Michael Muller, John Richards, Steven I Ross, Justin D. Weisz

AI Explainability 360: Impact and Design

Sep 24, 2021
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Towards evaluating and eliciting high-quality documentation for intelligent systems

Nov 17, 2020
David Piorkowski, Daniel González, John Richards, Stephanie Houde

A Methodology for Creating AI FactSheets

Jun 28, 2020
John Richards, David Piorkowski, Michael Hind, Stephanie Houde, Aleksandra Mojsilović

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

Oct 03, 2018
Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang
