
Krishnaram Kenthapadi

JEDA: Query-Free Clinical Order Search from Ambient Dialogues

Oct 16, 2025

Permissioned LLMs: Enforcing Access Control in Large Language Models

May 28, 2025

RedactOR: An LLM-Powered Framework for Automatic Clinical Data De-Identification

May 23, 2025

Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey

Feb 08, 2025

Measuring Distributional Shifts in Text: The Advantage of Language Model-Based Embeddings

Dec 04, 2023

Designing Closed-Loop Models for Task Allocation

May 31, 2023

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

Jul 06, 2022

Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases

Jun 25, 2022

A Human-Centric Take on Model Monitoring

Jun 06, 2022

Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

Apr 09, 2022