Luiz Velho

Neural Implicit Morphing of Face Images

Aug 26, 2023
Guilherme Schardong, Tiago Novello, Daniel Perazzo, Hallison Paz, Iurii Medvedev, Luiz Velho, Nuno Gonçalves

Face morphing is one of the seminal problems in computer graphics, with numerous artistic and forensic applications. It is notoriously challenging due to variations in pose, lighting, gender, and ethnicity. Generally, this task consists of a warping for feature alignment and a blending for a seamless transition between the warped images. We propose to leverage coordinate-based neural networks to represent such warpings and blendings of face images. During training, we exploit the smoothness and flexibility of these networks by combining energy functionals employed in classical approaches, without the need for discretization. Additionally, our method is time-dependent, allowing continuous warping and blending of the target images. During warping inference, we need both the direct and inverse transformations of the time-dependent warping: the former morphs the target image toward the source image, while the latter morphs in the opposite direction. Our neural warping stores both maps in a single network thanks to its invertibility, avoiding the difficult task of inverting them explicitly. Our experiments indicate that our method is competitive with both classical and data-based neural techniques under the lens of face-morphing detection approaches. Aesthetically, the resulting images present a seamless blending of diverse faces that is not yet common in the literature.
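
As a concrete illustration of a coordinate-based, time-dependent warping, the sketch below shows how such a network could be set up in PyTorch. It is a minimal sketch, not the authors' implementation: the sine activations, layer sizes, and the NeuralWarp interface are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class SineLayer(nn.Module):
        """Fully connected layer with a sinusoidal activation (SIREN-style)."""
        def __init__(self, in_features, out_features, omega=30.0):
            super().__init__()
            self.omega = omega
            self.linear = nn.Linear(in_features, out_features)

        def forward(self, x):
            return torch.sin(self.omega * self.linear(x))

    class NeuralWarp(nn.Module):
        """Coordinate network mapping (x, y, t) to warped coordinates.

        Evaluating at time t and at -t would give the two directions of the
        time-dependent warping (a simplifying assumption for illustration).
        """
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                SineLayer(3, hidden),
                SineLayer(hidden, hidden),
                nn.Linear(hidden, 2),
            )

        def forward(self, coords, t):
            t_col = torch.full_like(coords[:, :1], t)
            return coords + self.net(torch.cat([coords, t_col], dim=-1))

    # Usage: warp a batch of normalized pixel coordinates halfway through the morph.
    warp = NeuralWarp()
    coords = torch.rand(1024, 2) * 2.0 - 1.0  # coordinates in [-1, 1]^2
    warped = warp(coords, t=0.5)

Because the displacement is a smooth function of (x, y, t), the same network can be queried at any intermediate time, which is what makes continuous warping and blending possible.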

* 17 pages, 11 figures 

Multiresolution Neural Networks for Imaging

Aug 27, 2022
Hallison Paz, Tiago Novello, Vinicius Silva, Luiz Schirmer, Guilherme Schardong, Fabio Chagas, Helio Lopes, Luiz Velho

We present MR-Net, a general architecture for multiresolution neural networks, and a framework for imaging applications based on this architecture. Our coordinate-based networks are continuous both in space and in scale, as they are composed of multiple stages that progressively add finer details. They also provide a compact and efficient representation. We show examples of multiresolution image representation and applications to texture magnification, minification, and antialiasing. This document is the extended version of the paper [PNS+22]; it includes additional material that did not fit within the page limits of the conference track.
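
A minimal sketch of this stage-based, coordinate-continuous design is given below, assuming PyTorch; the number of stages, the ReLU activations, and the sinusoidal frequency encoding are illustrative choices, not the MR-Net architecture itself.

    import torch
    import torch.nn as nn

    class Stage(nn.Module):
        """One resolution stage: a small coordinate MLP predicting an RGB residual."""
        def __init__(self, hidden, freq):
            super().__init__()
            self.freq = freq
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, coords):
            # Higher-frequency input encoding for the finer stages (illustrative).
            return self.net(torch.sin(self.freq * coords))

    class MultiresImage(nn.Module):
        """Sum of stages; evaluating only the first k stages gives a coarser level of detail."""
        def __init__(self, n_stages=3, hidden=64):
            super().__init__()
            self.stages = nn.ModuleList(
                [Stage(hidden, freq=2.0 ** i) for i in range(n_stages)]
            )

        def forward(self, coords, level=None):
            level = len(self.stages) if level is None else level
            out = torch.zeros(coords.shape[0], 3, device=coords.device)
            for stage in self.stages[:level]:
                out = out + stage(coords)
            return out

    # Coarse-to-fine evaluation of the same continuous image representation.
    model = MultiresImage()
    coords = torch.rand(512, 2) * 2.0 - 1.0
    coarse = model(coords, level=1)  # lowest level of detail
    full = model(coords)             # all stages: finest detail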

Neural Implicit Surfaces in Higher Dimension

Jan 26, 2022
Tiago Novello, Vinicius da Silva, Helio Lopes, Guilherme Schardong, Luiz Schirmer, Luiz Velho

This work investigates the use of neural networks admitting high-order derivatives for modeling dynamic variations of smooth implicit surfaces. For this purpose, it extends the representation of differentiable neural implicit surfaces to higher dimensions, which opens up mechanisms for exploiting geometric transformations in many settings, from animation and surface evolution to shape morphing and design galleries. The problem is modeled by a $k$-parameter family of surfaces $S_c$, specified as a neural network function $f : \mathbb{R}^3 \times \mathbb{R}^k \rightarrow \mathbb{R}$, where $S_c$ is the zero-level set of the implicit function $f(\cdot, c) : \mathbb{R}^3 \rightarrow \mathbb{R}$ for $c \in \mathbb{R}^k$, with variations induced by the control variable $c$. In this context, restricted to each coordinate of $\mathbb{R}^k$, the underlying representation is a neural homotopy which is the solution of a general partial differential equation.
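
The $k$-parameter family can be read directly as a network signature. The sketch below, assuming PyTorch, evaluates such an $f : \mathbb{R}^3 \times \mathbb{R}^k \rightarrow \mathbb{R}$ at sample points for a fixed control value $c$; the Softplus activations (chosen because they admit high-order derivatives) and the layer sizes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class FamilyOfSurfaces(nn.Module):
        """Implicit function f: R^3 x R^k -> R; S_c is the zero-level set of f(., c)."""
        def __init__(self, k=1, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + k, hidden), nn.Softplus(beta=100),
                nn.Linear(hidden, hidden), nn.Softplus(beta=100),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, c):
            # Broadcast the control variable c to every query point.
            return self.net(torch.cat([x, c.expand(x.shape[0], -1)], dim=-1))

    # Points where f(., c) is approximately zero lie on the surface S_c.
    f = FamilyOfSurfaces(k=1)
    points = torch.rand(256, 3) * 2.0 - 1.0
    c = torch.tensor([[0.25]])
    values = f(points, c)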

Differential Geometry in Neural Implicits

Jan 26, 2022
Tiago Novello, Vinicius da Silva, Helio Lopes, Guilherme Schardong, Luiz Schirmer, Luiz Velho

We introduce a neural implicit framework that bridges the discrete differential geometry of triangle meshes and the continuous differential geometry of neural implicit surfaces. It exploits the differentiable properties of neural networks and the discrete geometry of triangle meshes to approximate the meshes as zero-level sets of neural implicit functions. To train a neural implicit function, we propose a loss function that allows terms with high-order derivatives, such as the alignment between the principal directions, to learn more geometric detail. During training, we use a non-uniform sampling strategy based on the discrete curvatures of the triangle mesh to prioritize points with more geometric detail. This sampling leads to faster learning while preserving geometric accuracy. We present the analytical differential geometry formulas for neural surfaces, such as normal vectors and curvatures, and use them to render the surfaces via sphere tracing. Additionally, we propose a network optimization based on singular value decomposition to reduce the number of parameters.
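
To make the role of derivative-based loss terms concrete, the sketch below shows an illustrative training loss for fitting a coordinate network to oriented surface samples of a mesh, assuming PyTorch. Only first-order terms (surface values, normal alignment, and an Eikonal penalty) are written out; the principal-direction terms from the paper are omitted, and the weighting is an assumption.

    import torch

    def sdf_loss(model, pts, normals):
        """Illustrative loss for fitting a neural implicit to oriented surface samples:
        f should vanish on the surface, its gradient should align with the mesh
        normals, and the Eikonal term keeps |grad f| close to 1.
        `model` is any coordinate network mapping 3D points to scalar values."""
        pts = pts.clone().requires_grad_(True)
        f = model(pts)
        grad = torch.autograd.grad(f.sum(), pts, create_graph=True)[0]

        on_surface = f.abs().mean()  # f = 0 on the mesh samples
        normal_align = (1.0 - torch.cosine_similarity(grad, normals, dim=-1)).mean()
        eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
        return on_surface + normal_align + 0.1 * eikonal

The curvature-based sampling described above would then feed this loss with batches drawn preferentially from high-curvature regions of the mesh.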

Can We Use Neural Regularization to Solve Depth Super-Resolution?

Dec 21, 2021
Milena Gazdieva, Oleg Voynov, Alexey Artemov, Youyi Zheng, Luiz Velho, Evgeny Burnaev

Depth maps captured with commodity sensors often require super-resolution before they can be used in applications. In this work, we study a super-resolution approach based on a variational problem statement with Tikhonov regularization, where the regularizer is parametrized by a deep neural network. This approach was previously applied successfully in photoacoustic tomography. We show experimentally that its application to depth map super-resolution is difficult, and we discuss possible reasons for this.
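
The variational formulation amounts to minimizing a data term plus a learned Tikhonov-style penalty over the high-resolution depth map. The sketch below, assuming PyTorch, optimizes the depth map directly; the average-pooling degradation operator, the Adam optimizer, the weight `lam`, and the `regularizer` callable are illustrative assumptions, not the paper's exact setup.

    import torch
    import torch.nn.functional as F

    def upscale_depth(low_res, regularizer, scale=4, steps=200, lam=0.1, lr=1e-2):
        """Illustrative variational depth super-resolution.

        low_res:     observed depth map, shape (N, 1, H, W)
        regularizer: callable mapping a depth map to a scalar penalty
                     (stands in for the pretrained neural regularizer)."""
        hr = F.interpolate(low_res, scale_factor=scale, mode="bilinear",
                           align_corners=False).clone().requires_grad_(True)
        opt = torch.optim.Adam([hr], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # Data term: the downsampled estimate should match the observation.
            data_term = F.mse_loss(F.avg_pool2d(hr, scale), low_res)
            loss = data_term + lam * regularizer(hr)
            loss.backward()
            opt.step()
        return hr.detach()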

* 9 pages 

Deep Reinforcement Learning for High Level Character Control

May 20, 2020
Caio Souza, Luiz Velho

In this paper, we propose the use of traditional animations, heuristic behavior, and reinforcement learning in the creation of intelligent characters for computational media. The traditional animations and heuristics give artistic control over the behavior, while reinforcement learning adds generalization. The use case presented is a dog character with a high-level controller in a 3D environment built around the desired behaviors to be learned, such as fetching an item. Since the design of the environment is key to learning, we further analyze how to build such learning environments, the effects of environment and agent modeling choices, training procedures, and the generalization of the learned behavior. This analysis builds insight into these factors and may serve as a guide for developing learning environments in general.
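
A rough sketch of how such a high-level controller might dispatch between scripted animations and a learned policy is given below; the class, the `policy` callable, and the animation callbacks are hypothetical placeholders, not the paper's implementation.

    import random

    class HighLevelController:
        """Illustrative high-level controller: scripted animations handle fixed
        behaviors, while a learned policy handles behaviors that need to generalize."""
        def __init__(self, policy, animations):
            self.policy = policy          # maps an observation to a low-level action
            self.animations = animations  # dict: behavior name -> scripted clip

        def act(self, behavior, observation):
            if behavior in self.animations:
                return self.animations[behavior](observation)  # heuristic / animation
            return self.policy(observation)                    # learned behavior

    # Usage: "sit" plays a scripted clip; "fetch" falls through to the RL policy.
    controller = HighLevelController(
        policy=lambda obs: random.choice(["move_forward", "turn_left", "grab"]),
        animations={"sit": lambda obs: "play_sit_clip"},
    )
    print(controller.act("fetch", {"item_visible": True}))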

Latent-Space Laplacian Pyramids for Adversarial Representation Learning with 3D Point Clouds

Dec 13, 2019
Vage Egiazarian, Savva Ignatyev, Alexey Artemov, Oleg Voynov, Andrey Kravchenko, Youyi Zheng, Luiz Velho, Evgeny Burnaev

Constructing high-quality generative models for 3D shapes is a fundamental task in computer vision with diverse applications in geometry processing, engineering, and design. Despite the recent progress in deep generative modelling, synthesis of finely detailed 3D surfaces, such as high-resolution point clouds, from scratch has not been achieved with existing approaches. In this work, we propose to employ the latent-space Laplacian pyramid representation within a hierarchical generative model for 3D point clouds. We combine the recently proposed latent-space GAN and Laplacian GAN architectures to form a multi-scale model capable of generating 3D point clouds at increasing levels of detail. Our evaluation demonstrates that our model outperforms the existing generative models for 3D point clouds.
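
The coarse-to-fine idea behind the latent-space Laplacian pyramid can be sketched as a base point set refined by per-level residual generators, as below (assuming PyTorch); the upsampling by point duplication, the MLP sizes, and the noise conditioning are simplified stand-ins for the paper's per-level GANs.

    import torch
    import torch.nn as nn

    class ResidualLevel(nn.Module):
        """One pyramid level: upsamples a point set and predicts per-point residual
        displacements conditioned on noise (a simplified per-level generator)."""
        def __init__(self, noise_dim=32, hidden=128):
            super().__init__()
            self.noise_dim = noise_dim
            self.mlp = nn.Sequential(
                nn.Linear(3 + noise_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, points):
            # Duplicate each point, then displace the copies to add finer detail.
            up = points.repeat_interleave(2, dim=1)
            z = torch.randn(up.shape[0], up.shape[1], self.noise_dim)
            return up + self.mlp(torch.cat([up, z], dim=-1))

    # Coarse-to-fine generation: a coarse cloud plus two refinement levels.
    coarse = torch.randn(1, 256, 3)  # stand-in for the base generator's output
    levels = [ResidualLevel(), ResidualLevel()]
    cloud = coarse
    for level in levels:
        cloud = level(cloud)
    print(cloud.shape)  # torch.Size([1, 1024, 3])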
