Hanna Wutte

Robust Utility Optimization via a GAN Approach

Mar 22, 2024
Florian Krach, Josef Teichmann, Hanna Wutte

How (Implicit) Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part II: the Multi-D Case of Two Layers with Random First Layer

Mar 20, 2023
Jakob Heiss, Josef Teichmann, Hanna Wutte

Infinite width (finite depth) neural networks benefit from multi-task learning unlike shallow Gaussian Processes -- an exact quantitative macroscopic characterization

Jan 05, 2022
Jakob Heiss, Josef Teichmann, Hanna Wutte

Infinite wide (finite depth) Neural Networks benefit from multi-task learning unlike shallow Gaussian Processes -- an exact quantitative macroscopic characterization

Dec 31, 2021
Jakob Heiss, Josef Teichmann, Hanna Wutte

NOMU: Neural Optimization-based Model Uncertainty

Mar 03, 2021
Jakob Heiss, Jakob Weissteiner, Hanna Wutte, Sven Seuken, Josef Teichmann

How implicit regularization of Neural Networks affects the learned function -- Part I

Nov 07, 2019
Jakob Heiss, Josef Teichmann, Hanna Wutte
