Hans-Peter Seidel

Joint Sampling and Optimisation for Inverse Rendering

Sep 27, 2023
Martin Balint, Karol Myszkowski, Hans-Peter Seidel, Gurprit Singh

When dealing with difficult inverse problems such as inverse rendering, using Monte Carlo estimated gradients to optimise parameters can slow down convergence due to variance. Averaging many gradient samples in each iteration reduces this variance trivially. However, for problems that require thousands of optimisation iterations, the computational cost of this approach rises quickly. We derive a theoretical framework for interleaving sampling and optimisation. We update and reuse past samples with low-variance finite-difference estimators that describe the change in the estimated gradients between each iteration. By combining proportional and finite-difference samples, we continuously reduce the variance of our novel gradient meta-estimators throughout the optimisation process. We investigate how our estimator interlinks with Adam and derive a stable combination. We implement our method for inverse path tracing and demonstrate how our estimator speeds up convergence on difficult optimisation tasks.
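
As a concrete illustration, here is a minimal sketch of such a gradient meta-estimator on a toy quadratic loss, with a hypothetical noisy oracle `noisy_grad` standing in for a path tracer. Note the hedge: the paper's finite-difference samples are drawn with correlated random numbers to achieve low variance, which this uncorrelated toy version does not reproduce, and the full method is paired with Adam rather than plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(theta):
    # Stand-in Monte Carlo gradient of the toy loss 0.5*||theta||^2;
    # in inverse rendering this would be a path-traced gradient estimate.
    return theta + rng.normal(scale=0.5, size=theta.shape)

theta = np.full(4, 5.0)
theta_prev = theta.copy()
g_hat = noisy_grad(theta)        # initial gradient estimate
alpha, lr = 0.8, 0.2

for _ in range(200):
    # Finite-difference sample: estimates how the gradient changed between
    # iterations (the paper draws these with correlated samples for low variance).
    delta = noisy_grad(theta) - noisy_grad(theta_prev)
    # Meta-estimator: carry the updated past estimate, blend in a fresh sample.
    g_hat = alpha * (g_hat + delta) + (1.0 - alpha) * noisy_grad(theta)
    theta_prev = theta.copy()
    theta = theta - lr * g_hat   # the paper derives a stable combination with Adam

# Small, but with a residual noise floor: this toy lacks correlated samples.
print(np.linalg.norm(theta))
```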

* SIGGRAPH Asia 2023 Conference Papers 

Large-Batch, Neural Multi-Objective Bayesian Optimization

Jun 12, 2023
Navid Ansari, Hans-Peter Seidel, Vahid Babaei

Bayesian optimization provides a powerful framework for global optimization of black-box, expensive-to-evaluate functions. However, it has limited capacity for data-intensive problems, especially in multi-objective settings, due to the poor scalability of the default Gaussian process surrogates. We present a novel Bayesian optimization framework specifically tailored to address these limitations. Our method uses Bayesian neural networks for surrogate modeling, which enables efficient handling of large batches of data, modeling of complex problems, and quantification of predictive uncertainty. In addition, our method incorporates a scalable, uncertainty-aware acquisition strategy based on the well-known, easy-to-deploy NSGA-II. This fully parallelizable strategy promotes efficient exploration of uncharted regions. Our framework allows for effective optimization in data-intensive environments with a minimal number of iterations. We demonstrate the superiority of our method by comparing it with state-of-the-art multi-objective optimization methods on two real-world problems, airfoil design and color printing, showcasing the applicability and efficiency of our approach. Code is available at: https://github.com/an-on-ym-ous/lbn_mobo
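
The acquisition idea can be sketched with a deep ensemble standing in for the Bayesian neural network surrogate and a plain non-dominated filter standing in for NSGA-II; `ensemble_predict`, the `beta` weight, and the batch size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def non_dominated(F):
    # Boolean mask of the Pareto-optimal rows of objective matrix F (minimise).
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        if keep[i]:
            dominated = np.all(F[i] <= F, axis=1) & np.any(F[i] < F, axis=1)
            keep &= ~dominated
    return keep

def select_batch(candidates, ensemble_predict, beta=1.0, batch=16):
    # ensemble_predict: (N, D) inputs -> (K, N, M) objectives from K surrogate
    # nets trained on bootstraps; their spread acts as epistemic uncertainty.
    preds = ensemble_predict(candidates)
    lcb = preds.mean(axis=0) - beta * preds.std(axis=0)  # optimistic bound
    front = np.flatnonzero(non_dominated(lcb))
    return candidates[front[:batch]]  # evaluate these on the true objectives
```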

Enhancing image quality prediction with self-supervised visual masking

May 31, 2023
Uğur Çoğalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski

Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short of capturing the complexities and nuances of human perception. In this work, rather than devising a novel IQM model, we seek to improve the perceptual quality of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to learn a visual masking model that modulates reference and distorted images in a way that penalizes visual errors based on their visibility. Since ground-truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely from mean opinion scores (MOS) collected on an FR-IQM dataset. Our approach yields enhanced FR-IQM metrics that are better aligned with human judgments, both visually and quantitatively.
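
A PyTorch sketch of the modulation step, assuming a hypothetical `base_metric` callable for the off-the-shelf FR-IQM; the mask network below is a toy stand-in for the paper's architecture, and the self-supervised training against MOS is only indicated in a comment.

```python
import torch
import torch.nn as nn

class MaskingNet(nn.Module):
    # Tiny stand-in for the visual-masking predictor: maps the reference image
    # to a positive per-pixel sensitivity map (the paper's architecture differs).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())

    def forward(self, ref):
        return self.net(ref)

def masked_metric(base_metric, mask_net, ref, dist):
    # Modulate both images so errors are weighted by their predicted
    # visibility, then apply the unchanged off-the-shelf FR-IQM.
    m = mask_net(ref)
    return base_metric(ref * m, dist * m)

# Training (not shown): fit masked_metric outputs to MOS from an FR-IQM
# dataset, which supervises the mask without ground-truth visibility maps.
```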

* 11 pages, 11 figures 

Neural Field Convolutions by Repeated Differentiation

Apr 04, 2023
Ntumba Elie Nsampi, Adarsh Djeacoumar, Hans-Peter Seidel, Tobias Ritschel, Thomas Leimkühler

Neural fields are evolving towards a general-purpose continuous representation for visual computing. Yet, despite their numerous appealing properties, they are hardly amenable to signal processing. As a remedy, we present a method to perform general continuous convolutions with general continuous signals such as neural fields. Observing that piecewise polynomial kernels reduce to a sparse set of Dirac deltas after repeated differentiation, we leverage convolution identities and train a repeated integral field to efficiently execute large-scale convolutions. We demonstrate our approach on a variety of data modalities and spatially-varying kernels.
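
The underlying identity is easy to verify in 1D: differentiating a box kernel once yields two Dirac deltas, so convolving a signal with the box reduces to two lookups into the signal's first antiderivative, the "integral field" that the paper represents with a neural network. A numerical sketch (the grid and Gaussian signal are illustrative):

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2)                  # the signal (a neural field in the paper)
F = np.cumsum(f) * dx              # first antiderivative ("integral field")
r = 0.5                            # box kernel half-width

# d(box)/dx = delta(x + r) - delta(x - r), so (f * box)(x) = F(x+r) - F(x-r).
conv = np.interp(x + r, x, F) - np.interp(x - r, x, F)

# Check against a direct discrete convolution (agrees up to discretisation).
box = (np.abs(x) <= r).astype(float)
direct = np.convolve(f, box, mode="same") * dx
print(np.max(np.abs(conv - direct)))
```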

GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild

Nov 23, 2022
Chao Wang, Ana Serrano, Xingang Pan, Bin Chen, Hans-Peter Seidel, Christian Theobalt, Karol Myszkowski, Thomas Leimkuehler

Most in-the-wild images are stored in Low Dynamic Range (LDR) form, serving as a partial observation of the High Dynamic Range (HDR) visual world. Despite limited dynamic range, these LDR images are often captured with different exposures, implicitly containing information about the underlying HDR image distribution. Inspired by this intuition, in this work we present, to the best of our knowledge, the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner. The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images. The projection from HDR to LDR is achieved via a camera model that captures the stochasticity in exposure and camera response function. Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows, where previous supervised generative models produce overexposed images. We further demonstrate the new application of unsupervised inverse tone mapping (ITM) enabled by GlowGAN. Our ITM method does not need HDR images or paired multi-exposure images for training, yet it reconstructs more plausible information for overexposed regions than state-of-the-art supervised learning models trained on such data.
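
The HDR-to-LDR projection can be sketched as follows; the log-normal exposure distribution, gamma response, and clipping are illustrative assumptions standing in for the paper's full stochastic camera model.

```python
import torch

def hdr_to_ldr(hdr, log_exposure_std=1.0, gamma=2.2):
    # Sample a random exposure per image, clip to the sensor's range, and
    # apply a simple gamma response. The paper's camera model additionally
    # captures stochasticity in the response function itself.
    exposure = torch.exp(torch.randn(hdr.shape[0], 1, 1, 1, device=hdr.device)
                         * log_exposure_std)
    return (hdr * exposure).clamp(0.0, 1.0) ** (1.0 / gamma)

# During training, the discriminator only ever sees such projections next to
# real LDR photos, so the generated HDR must look plausible under any exposure.
```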

Autoinverse: Uncertainty Aware Inversion of Neural Networks

Aug 29, 2022
Navid Ansari, Hans-Peter Seidel, Nima Vahidi Ferdowsi, Vahid Babaei

Neural networks are powerful surrogates for numerous forward processes. The inversion of such surrogates is extremely valuable in science and engineering. The most important property of a successful neural inverse method is the performance of its solutions when deployed in the real world, i.e., on the native forward process (and not only the learned surrogate). We propose Autoinverse, a highly automated approach for inverting neural network surrogates. Our main insight is to seek inverse solutions in the vicinity of reliable data which have been sampled from the forward process and used for training the surrogate model. Autoinverse finds such solutions by taking into account the predictive uncertainty of the surrogate and minimizing it during the inversion. Apart from high accuracy, Autoinverse enforces the feasibility of solutions, comes with embedded regularization, and is initialization-free. We verify our proposed method by addressing a set of real-world problems in control, fabrication, and design.
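
A sketch of the inversion loop with an ensemble of surrogate nets, where ensemble disagreement stands in for the predictive uncertainty; this is an assumption for illustration, and the paper's uncertainty estimate and initialization-free machinery are not reproduced here (the sketch still takes a starting point `x0`).

```python
import torch

def autoinvert(ensemble, y_target, x0, lam=1.0, steps=500, lr=1e-2):
    # Optimise the input so the ensemble mean matches the target while the
    # ensemble variance stays small, keeping the solution near the reliable
    # training data where the surrogate can be trusted.
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        preds = torch.stack([m(x) for m in ensemble])        # (K, ...) outputs
        loss = ((preds.mean(0) - y_target) ** 2).mean() \
             + lam * preds.var(0).mean()                     # uncertainty penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```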

Video frame interpolation for high dynamic range sequences captured with dual-exposure sensors

Jun 19, 2022
Ugur Cogalan, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski

Video frame interpolation (VFI) enables many important applications that may involve the temporal domain, such as slow-motion playback, or the spatial domain, such as stop-motion sequences. We focus on the former task, where one of the key challenges is handling high dynamic range (HDR) scenes in the presence of complex motion. To this end, we explore the possible advantages of dual-exposure sensors that readily provide sharp short and blurry long exposures which are spatially registered and whose ends are temporally aligned. This way, motion blur registers temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates more complex motion reconstruction in the VFI task, as well as HDR frame reconstruction, which so far has been considered only for the originally captured frames, not for in-between interpolated frames. We design a neural network trained for these tasks that clearly outperforms existing solutions. We also propose a metric for scene motion complexity that provides important insights into the performance of VFI methods at test time.
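
The sensor's forward model, which a network would be trained to invert, can be sketched as follows; treating the short exposure as exactly the final latent frame is a simplifying assumption for illustration, not the paper's calibrated sensor model.

```python
import torch

def dual_exposure(latent_frames):
    # Toy forward model of a dual-exposure sensor: the long exposure averages
    # the latent frames (motion blur encodes continuous motion), while the
    # short exposure is sharp; both exposures end at the same instant, so the
    # short one is taken as the final latent frame here.
    long_blurry = latent_frames.mean(dim=0)
    short_sharp = latent_frames[-1]
    return short_sharp, long_blurry

# A VFI network is then trained to recover the in-between latent frames
# (and HDR content) from such spatially registered exposure pairs.
frames = torch.rand(8, 3, 64, 64)   # 8 latent frames of a synthetic clip
short, long_ = dual_exposure(frames)
```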

* 13 pages, 7 figures 

Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data

Jun 14, 2022
Mengyu Chu, Lingjie Liu, Quan Zheng, Erik Franz, Hans-Peter Seidel, Christian Theobalt, Rhaleb Zayer

High-fidelity reconstruction of fluids from sparse multiview RGB videos remains a formidable challenge due to the complexity of the underlying physics as well as complex occlusion and lighting in captures. Existing solutions either assume knowledge of obstacles and lighting, or only focus on simple fluid scenes without obstacles or complex lighting, and are thus unsuitable for real-world scenes with unknown lighting or arbitrary obstacles. We present the first method to reconstruct dynamic fluid by leveraging the governing physics (i.e., the Navier-Stokes equations) in an end-to-end optimization from sparse videos, without taking lighting conditions, geometry information, or boundary conditions as input. We provide a continuous spatio-temporal scene representation using neural networks as the ansatz of the density and velocity solution functions for fluids, as well as the radiance field for static objects. With a hybrid architecture that separates static and dynamic contents, fluid interactions with static obstacles are reconstructed for the first time without additional geometry input or human labeling. By augmenting time-varying neural radiance fields with physics-informed deep learning, our method benefits from the supervision of images and physical priors. To achieve robust optimization from sparse views, we introduce a layer-by-layer growing strategy to progressively increase the network capacity. Using progressively growing models with a new regularization term, we manage to disentangle the density-color ambiguity in radiance fields without overfitting. A pretrained density-to-velocity fluid model is additionally leveraged as a data prior to avoid suboptimal velocity solutions that underestimate vorticity while trivially fulfilling the physical equations. Our method exhibits high-quality results with relaxed constraints and strong flexibility on a representative set of synthetic and real flow captures.
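
One such physics prior, the incompressibility residual, can be sketched in PyTorch via automatic differentiation; `velocity_net` is a hypothetical neural velocity field mapping (x, y, z, t) to a 3D velocity, and the full method combines several residuals of this kind with the image loss.

```python
import torch

def incompressibility_residual(velocity_net, xyzt):
    # Divergence of the velocity field computed with autograd; penalising its
    # magnitude enforces one of the Navier-Stokes constraints on the
    # physics-informed neural fields alongside the image supervision.
    xyzt = xyzt.detach().requires_grad_(True)
    u = velocity_net(xyzt)                  # (N, 3) velocity at (x, y, z, t)
    div = 0.0
    for i in range(3):
        g = torch.autograd.grad(u[:, i].sum(), xyzt, create_graph=True)[0]
        div = div + g[:, i]                 # accumulate du_i / dx_i
    return div.pow(2).mean()                # zero for divergence-free flow
```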

* ACM Trans. Graph. 41, 4 (2022), 119:1-119:14 
* Accepted to ACM Transactions on Graphics (SIGGRAPH 2022); further info: https://people.mpi-inf.mpg.de/~mchu/projects/PI-NeRF/ 

Eikonal Fields for Refractive Novel-View Synthesis

Feb 11, 2022
Mojtaba Bemana, Karol Myszkowski, Jeppe Revall Frisvad, Hans-Peter Seidel, Tobias Ritschel

We tackle the problem of generating novel-view images from collections of 2D images showing refractive and reflective objects. Current solutions assume opaque or transparent light transport along straight paths following the emission-absorption model. Instead, we optimize for a field of 3D-varying index of refraction (IoR) and trace light through it, bending ray paths toward the spatial gradients of this IoR field according to the laws of eikonal light transport.
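
Eikonal ray marching through such a field can be sketched as follows; `n` and `grad_n` are hypothetical callables (in the paper, an optimized neural IoR field and its gradient), and the step count and size are illustrative.

```python
import numpy as np

def trace_ray(x, d, n, grad_n, ds=0.01, steps=400):
    # March a ray through a spatially varying index of refraction n(x),
    # following the eikonal equations dx/ds = v / n and dv/ds = grad n,
    # where v = n * (unit ray direction); rays bend toward increasing IoR.
    v = n(x) * d / np.linalg.norm(d)
    for _ in range(steps):
        x = x + ds * v / n(x)
        v = v + ds * grad_n(x)
    return x, v / np.linalg.norm(v)

# Toy medium: IoR increasing along +y bends a horizontal ray upward.
n = lambda x: 1.0 + 0.1 * x[1]
grad_n = lambda x: np.array([0.0, 0.1, 0.0])
end, direction = trace_ray(np.zeros(3), np.array([1.0, 0.0, 0.0]), n, grad_n)
```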

* 8 pages, 6 figures 