
Amit Dhurandhar

Deep Generative Sampling in the Dual Divergence Space: A Data-efficient & Interpretative Approach for Generative AI

Apr 10, 2024

Multi-Level Explanations for Generative Language Models

Mar 21, 2024

Ranking Large Language Models without Ground Truth

Feb 21, 2024

Trust Regions for Explanations via Black-Box Probabilistic Certification

Feb 21, 2024

Spectral Adversarial MixUp for Few-Shot Unsupervised Domain Adaptation

Sep 03, 2023

When Neural Networks Fail to Generalize? A Model Sensitivity Perspective

Dec 01, 2022

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022

PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization

Sep 14, 2022

Atomist or Holist? A Diagnosis and Vision for More Productive Interdisciplinary AI Ethics Dialogue

Sep 01, 2022

Anomaly Attribution with Likelihood Compensation

Aug 23, 2022