Maarten V. de Hoop

Implicit Neural Representations and the Algebra of Complex Wavelets
Oct 01, 2023
T. Mitchell Roddenberry, Vishwanath Saragadam, Maarten V. de Hoop, Richard G. Baraniuk

Harpa: High-Rate Phase Association with Travel Time Neural Fields
Jul 14, 2023
Cheng Shi, Maarten V. de Hoop, Ivan Dokmanić

Globally injective and bijective neural operators
Jun 06, 2023
Takashi Furuya, Michael Puthawala, Matti Lassas, Maarten V. de Hoop

Conditional score-based diffusion models for Bayesian inference in infinite dimensions
May 28, 2023
Lorenzo Baldassari, Ali Siahkoohi, Josselin Garnier, Knut Solna, Maarten V. de Hoop

Martian time-series unraveled: A multi-scale nested approach with factorial variational autoencoders
May 25, 2023
Ali Siahkoohi, Rudy Morel, Randall Balestriero, Erwan Allys, Grégory Sainton, Taichi Kawamura, Maarten V. de Hoop

A Transfer Principle: Universal Approximators Between Metric Spaces From Euclidean Universal Approximators
Apr 24, 2023
Anastasis Kratsios, Chong Liu, Matti Lassas, Maarten V. de Hoop, Ivan Dokmanić

Unearthing InSights into Mars: unsupervised source separation with limited data
Jan 27, 2023
Ali Siahkoohi, Rudy Morel, Maarten V. de Hoop, Erwan Allys, Grégory Sainton, Taichi Kawamura

Fine-tuning Neural-Operator architectures for training and generalization
Jan 27, 2023
Jose Antonio Lara Benitez, Takashi Furuya, Florian Faucher, Xavier Tricoche, Maarten V. de Hoop

Convergence Rates for Learning Linear Operators from Noisy Data
Aug 27, 2021
Maarten V. de Hoop, Nikola B. Kovachki, Nicholas H. Nelsen, Andrew M. Stuart

Deep learning architectures for nonlinear operator functions and nonlinear inverse problems
Jan 23, 2020
Maarten V. de Hoop, Matti Lassas, Christopher A. Wong