Stephanie Houde

A Case Study in Engineering a Conversational Programming Assistant's Persona

Jan 13, 2023
Steven I. Ross, Michael Muller, Fernando Martinez, Stephanie Houde, Justin D. Weisz

Toward General Design Principles for Generative AI Applications

Jan 13, 2023
Justin D. Weisz, Michael Muller, Jessica He, Stephanie Houde

Investigating Explainability of Generative AI for Code through Scenario-based Design

Feb 10, 2022
Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz

Using Document Similarity Methods to create Parallel Datasets for Code Translation

Oct 11, 2021
Mayank Agarwal, Kartik Talamadupula, Fernando Martinez, Stephanie Houde, Michael Muller, John Richards, Steven I. Ross, Justin D. Weisz

AI Explainability 360: Impact and Design

Sep 24, 2021
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Towards evaluating and eliciting high-quality documentation for intelligent systems

Nov 17, 2020
David Piorkowski, Daniel González, John Richards, Stephanie Houde

A Methodology for Creating AI FactSheets

Jun 28, 2020
John Richards, David Piorkowski, Michael Hind, Stephanie Houde, Aleksandra Mojsilović

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
