Eric Nalisnick

Exploiting Inferential Structure in Neural Processes

Jun 27, 2023
Dharmesh Tailor, Mohammad Emtiyaz Khan, Eric Nalisnick

Towards Anytime Classification in Early-Exit Architectures by Enforcing Conditional Monotonicity

Jun 05, 2023
Metod Jazbec, James Urquhart Allingham, Dan Zhang, Eric Nalisnick

Do Bayesian Neural Networks Need To Be Fully Stochastic?

Nov 11, 2022
Mrinank Sharma, Sebastian Farquhar, Eric Nalisnick, Tom Rainforth

Learning to Defer to Multiple Experts: Consistent Surrogate Losses, Confidence Calibration, and Conformal Ensembles

Oct 30, 2022
Rajeev Verma, Daniel Barrejón, Eric Nalisnick

Sampling-based inference for large linear models, with application to linearised Laplace

Oct 10, 2022
Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato

Hate Speech Criteria: A Modular Approach to Task-Specific Hate Speech Definitions

Jun 30, 2022
Urja Khurana, Ivar Vermeulen, Eric Nalisnick, Marloes van Noorloos, Antske Fokkens

Adapting the Linearised Laplace Model Evidence for Modern Deep Learning

Jun 17, 2022
Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato

Adversarial Defense via Image Denoising with Chaotic Encryption

Mar 19, 2022
Shi Hu, Eric Nalisnick, Max Welling

Calibrated Learning to Defer with One-vs-All Classifiers

Feb 08, 2022
Rajeev Verma, Eric Nalisnick

How Emotionally Stable is ALBERT? Testing Robustness with Stochastic Weight Averaging on a Sentiment Analysis Task

Nov 18, 2021
Urja Khurana, Eric Nalisnick, Antske Fokkens
