Eric Nalisnick

Approximating Full Conformal Prediction for Neural Network Regression with Gauss-Newton Influence

Jul 27, 2025

Are vision language models robust to uncertain inputs?

May 17, 2025

Generative Uncertainty in Diffusion Models

Feb 28, 2025

On Calibration in Multi-Distribution Learning

Dec 18, 2024

ELBOing Stein: Variational Bayes with Stein Mixture Inference

Oct 30, 2024

DefVerify: Do Hate Speech Models Reflect Their Dataset's Definition?

Oct 21, 2024

Lightning UQ Box: A Comprehensive Framework for Uncertainty Quantification in Deep Learning

Oct 04, 2024

Crowd-Calibrator: Can Annotator Disagreement Inform Calibration in Subjective Tasks?

Aug 26, 2024

Test-Time Adaptation with State-Space Models

Jul 17, 2024

Fast yet Safe: Early-Exiting with Risk Control

May 31, 2024