Ingo Steinwart

Conditioning of Banach Space Valued Gaussian Random Variables: An Approximation Approach Based on Martingales
Apr 04, 2024
Ingo Steinwart

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
May 23, 2023
Moritz Haas, David Holzmüller, Ulrike von Luxburg, Ingo Steinwart

Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers
Dec 23, 2022
Marvin Pförtner, Ingo Steinwart, Philipp Hennig, Jonathan Wenger

Utilizing Expert Features for Contrastive Learning of Time-Series Representations
Jun 23, 2022
Manuel Nonnenmacher, Lukas Oldenburg, Ingo Steinwart, David Reeb

A Framework and Benchmark for Deep Batch Active Learning for Regression
Mar 17, 2022
David Holzmüller, Viktor Zaverkin, Johannes Kästner, Ingo Steinwart

SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning
Oct 19, 2021
Manuel Nonnenmacher, Thomas Pfeil, Ingo Steinwart, David Reeb

Fast and Sample-Efficient Interatomic Neural Network Potentials for Molecules and Materials Based on Gaussian Moments
Sep 20, 2021
Viktor Zaverkin, David Holzmüller, Ingo Steinwart, Johannes Kästner

Which Minimizer Does My Neural Network Converge To?
Nov 04, 2020
Manuel Nonnenmacher, David Reeb, Ingo Steinwart

Reproducing Kernel Hilbert Spaces Cannot Contain all Continuous Functions on a Compact Metric Space
Mar 13, 2020
Ingo Steinwart

Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent
Feb 12, 2020
David Holzmüller, Ingo Steinwart
