Ronny Luss

Multi-Level Explanations for Generative Language Models

Mar 21, 2024
Lucas Monteiro Paes, Dennis Wei, Hyo Jin Do, Hendrik Strobelt, Ronny Luss, Amit Dhurandhar, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Werner Geyer, Soumya Ghosh

Contextual Moral Value Alignment Through Context-Based Aggregation

Mar 19, 2024
Pierre Dognin, Jesus Rios, Ronny Luss, Inkit Padhi, Matthew D Riemer, Miao Liu, Prasanna Sattigeri, Manish Nagireddy, Kush R. Varshney, Djallel Bouneffouf

Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI

Jun 22, 2022
Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar

Local Explanations for Reinforcement Learning

Feb 08, 2022
Ronny Luss, Amit Dhurandhar, Miao Liu

Auto-Transfer: Learning to Route Transferrable Representations

Feb 04, 2022
Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar

AI Explainability 360: Impact and Design

Sep 24, 2021
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang

Let the CAT out of the bag: Contrastive Attributed explanations for Text

Sep 16, 2021
Saneem Chemmengath, Amar Prakash Azad, Ronny Luss, Amit Dhurandhar

Towards Better Model Understanding with Path-Sufficient Explanations

Sep 13, 2021
Ronny Luss, Amit Dhurandhar

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019
Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
