Jakob Heiss

JUCAL: Jointly Calibrating Aleatoric and Epistemic Uncertainty in Classification Tasks

Feb 23, 2026

Nonparametric Filtering, Estimation and Classification using Neural Jump ODEs

Dec 04, 2024

Prices, Bids, Values: Everything, Everywhere, All at Once

Nov 14, 2024

Machine Learning-powered Combinatorial Clock Auction

Aug 20, 2023

Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent Observation Framework

Jul 24, 2023

How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part II: the Multi-D Case of Two Layers with Random First Layer

Mar 20, 2023

Bayesian Optimization-based Combinatorial Assignment

Aug 31, 2022

Infinite width (finite depth) neural networks benefit from multi-task learning unlike shallow Gaussian Processes -- an exact quantitative macroscopic characterization

Jan 05, 2022

NOMU: Neural Optimization-based Model Uncertainty

Mar 03, 2021

How implicit regularization of Neural Networks affects the learned function -- Part I

Nov 07, 2019