Arvind Satyanarayan
What is a Fair Diffusion Model? Designing Generative Text-To-Image Models to Incorporate Various Worldviews

Sep 18, 2023
Zoe De Simone, Angie Boggust, Arvind Satyanarayan, Ashia Wilson


VisText: A Benchmark for Semantically Rich Chart Captioning

Jun 28, 2023
Benny J. Tang, Angie Boggust, Arvind Satyanarayan


Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods

Jun 07, 2022
Angie Boggust, Harini Suresh, Hendrik Strobelt, John V. Guttag, Arvind Satyanarayan


Teaching Humans When To Defer to a Classifier via Exemplars

Nov 22, 2021
Hussein Mozannar, Arvind Satyanarayan, David Sontag


LMdiff: A Visual Diff Tool to Compare Language Models

Nov 02, 2021
Hendrik Strobelt, Benjamin Hoover, Arvind Satyanarayan, Sebastian Gehrmann


Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content

Oct 08, 2021
Alan Lundgard, Arvind Satyanarayan


Shared Interest: Large-Scale Visual Analysis of Model Behavior by Measuring Human-AI Alignment

Jul 20, 2021
Angie Boggust, Benjamin Hoover, Arvind Satyanarayan, Hendrik Strobelt


Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

Feb 17, 2021
Harini Suresh, Kathleen M. Lewis, John V. Guttag, Arvind Satyanarayan


Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs

Jan 24, 2021
Harini Suresh, Steven R. Gomez, Kevin K. Nam, Arvind Satyanarayan


Embedding Comparator: Visualizing Differences in Global Structure and Local Neighborhoods via Small Multiples

Dec 10, 2019
Angie Boggust, Brandon Carter, Arvind Satyanarayan
