Tiago Novello

Neural Implicit Morphing of Face Images

Aug 26, 2023
Guilherme Schardong, Tiago Novello, Daniel Perazzo, Hallison Paz, Iurii Medvedev, Luiz Velho, Nuno Gonçalves

Face morphing is one of the seminal problems in computer graphics, with numerous artistic and forensic applications. It is notoriously challenging due to variations in pose, lighting, gender, and ethnicity. Generally, the task consists of a warping for feature alignment and a blending for a seamless transition between the warped images. We propose to leverage coordinate-based neural networks to represent such warpings and blendings of face images. During training, we exploit the smoothness and flexibility of these networks by combining energy functionals employed in classical approaches, without discretizations. Additionally, our method is time-dependent, allowing a continuous warping and blending of the target images. During warping inference, we need both the direct and the inverse transformations of the time-dependent warping: the first morphs the target image into the source image, while the inverse morphs in the opposite direction. Our neural warping stores both maps in a single network thanks to its invertibility, avoiding the hard task of inverting them explicitly. Our experiments indicate that our method is competitive with both classical and data-based neural techniques under the lens of face-morphing detection approaches. Aesthetically, the resulting images present a seamless blending of diverse faces that is not yet common in the literature.
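
To make the setup concrete, here is a minimal sketch, under our own assumptions rather than the authors' code, of a time-dependent coordinate-based warping: a network receives an image point x and a time t, and evaluating it at t and -t plays the roles of the direct and inverse directions of the morphing. The class name, layer sizes, and activations are hypothetical.

```python
import torch
import torch.nn as nn

class NeuralWarp(nn.Module):
    """Hypothetical time-dependent warp T(x, t): R^2 x R -> R^2."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),   # input: (x, y, t)
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),              # output: a 2D displacement
        )

    def forward(self, x, t):
        # x: (N, 2) image coordinates, t: (N, 1) time in [-1, 1]
        h = torch.cat([x, t], dim=-1)
        return x + t * self.net(h)             # identity warp at t = 0

warp = NeuralWarp()
x = torch.rand(1024, 2) * 2 - 1                # points in [-1, 1]^2
t = torch.full((1024, 1), 0.5)
x_direct = warp(x, t)                          # warp toward one face
x_inverse = warp(x, -t)                        # warp in the opposite direction
```

In this toy setup, querying the same network at negative times stands in for the inverse direction; the actual invertibility property of the neural warping is established in the paper.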

* 17 pages, 11 figures 

Understanding Sinusoidal Neural Networks

Dec 04, 2022
Tiago Novello

In this work, we investigate the representation capacity of multilayer perceptrons that use the sine as activation function (sinusoidal neural networks). We show that the layer composition in such networks compacts information. To this end, we prove that the composition of sinusoidal layers expands as a sum of sines whose frequencies are linear combinations of the weights of the network's first layer, yielding a large number of new frequencies. We provide expressions for the corresponding amplitudes in terms of Bessel functions and give an upper bound for them that can be used to control the resulting approximation.
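
The simplest instance of this expansion can be checked numerically. The sketch below, ours rather than the paper's, composes two scalar sinusoidal layers, sin(a sin(w x)); by the Jacobi-Anger identity this equals a sum of sines at the odd multiples of the first-layer weight w, with amplitudes given by Bessel functions of the first kind.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

w, a = 3.0, 1.5                        # first- and second-layer weights
x = np.linspace(-np.pi, np.pi, 1000)

composed = np.sin(a * np.sin(w * x))   # two composed sinusoidal layers

# Truncated expansion: the new frequencies are odd multiples of w,
# and the amplitudes 2*J_{2k+1}(a) decay rapidly, bounding the error.
expansion = sum(2 * jv(2 * k + 1, a) * np.sin((2 * k + 1) * w * x)
                for k in range(5))

print(np.max(np.abs(composed - expansion)))   # ~1e-9 truncation error
```

The rapid decay of the Bessel amplitudes is what makes such an upper bound useful: only a few of the new frequencies carry significant energy.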

Multiresolution Neural Networks for Imaging

Aug 27, 2022
Hallison Paz, Tiago Novello, Vinicius Silva, Luiz Schirmer, Guilherme Schardong, Fabio Chagas, Helio Lopes, Luiz Velho

We present MR-Net, a general architecture for multiresolution neural networks, and a framework for imaging applications based on this architecture. Our coordinate-based networks are continuous both in space and in scale, as they are composed of multiple stages that progressively add finer details. They are also a compact and efficient representation. We show examples of multiresolution image representation and applications to texture magnification, minification, and antialiasing. This document is the extended version of the paper [PNS+22]; it includes additional material that would not fit within the page limits of the conference track.
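
A minimal sketch of the coarse-to-fine idea follows; the stage structure, frequency bands, and names are assumptions for illustration, not the MR-Net implementation. Each stage is a small sinusoidal coordinate network covering a finer band of detail, and evaluating a partial sum of stages gives a coarser level of the representation.

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One detail band: a small sinusoidal coordinate network (hypothetical)."""
    def __init__(self, in_dim=2, hidden=64, omega=1.0):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden)
        self.out = nn.Linear(hidden, 3)        # RGB residual
        self.omega = omega                     # frequency scale of this stage

    def forward(self, x):
        return self.out(torch.sin(self.omega * self.first(x)))

class MultiresImage(nn.Module):
    def __init__(self, n_stages=4):
        super().__init__()
        # progressively higher-frequency stages (the bands are an assumption)
        self.stages = nn.ModuleList(Stage(omega=2.0 ** s) for s in range(n_stages))

    def forward(self, x, level=None):
        # level = how many stages contribute, from coarse (1) to fine (all)
        stages = self.stages if level is None else self.stages[:level]
        return sum(stage(x) for stage in stages)

net = MultiresImage()
coords = torch.rand(256, 2) * 2 - 1
coarse = net(coords, level=1)   # low-frequency approximation
fine = net(coords)              # all stages: full-detail reconstruction
```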

Neural Implicit Surfaces in Higher Dimension

Jan 26, 2022
Tiago Novello, Vinicius da Silva, Helio Lopes, Guilherme Schardong, Luiz Schirmer, Luiz Velho

This work investigates the use of neural networks admitting high-order derivatives for modeling dynamic variations of smooth implicit surfaces. For this purpose, it extends the representation of differentiable neural implicit surfaces to higher dimensions, which opens up mechanisms for exploiting geometric transformations in many settings, from animation and surface evolution to shape morphing and design galleries. The problem is modeled by a $k$-parameter family of surfaces $S_c$, specified as a neural network function $f : \mathbb{R}^3 \times \mathbb{R}^k \rightarrow \mathbb{R}$, where $S_c$ is the zero-level set of the implicit function $f(\cdot, c) : \mathbb{R}^3 \rightarrow \mathbb{R}$ and the variations are induced by the control variable $c \in \mathbb{R}^k$. In that context, restricted to each coordinate of $\mathbb{R}^k$, the underlying representation is a neural homotopy that solves a general partial differential equation.
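
The sketch below, an illustration under our own architectural assumptions, instantiates this construction: a network takes a spatial point p together with a control parameter c and returns a scalar, so that varying c deforms the zero-level set S_c.

```python
import torch
import torch.nn as nn

class FamilyOfSurfaces(nn.Module):
    """Hypothetical f : R^3 x R^k -> R; S_c is the zero-level set of f(., c)."""
    def __init__(self, k=1, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + k, hidden), nn.Softplus(beta=100),  # smooth activation
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, p, c):
        # p: (N, 3) spatial points, c: (N, k) control parameters
        return self.net(torch.cat([p, c], dim=-1))

f = FamilyOfSurfaces(k=1)
p = torch.rand(4096, 3) * 2 - 1
c0 = torch.zeros(4096, 1)            # one member of the family, S_{c0}
c1 = 0.5 * torch.ones(4096, 1)       # another member, S_{c1}
inside = f(p, c0) < 0                # sign test against the implicit surface
```

Restricting c to a segment between c0 and c1 traces a path between the two surfaces, which is the one-parameter homotopy case discussed above.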

Differential Geometry in Neural Implicits

Jan 26, 2022
Tiago Novello, Vinicius da Silva, Helio Lopes, Guilherme Schardong, Luiz Schirmer, Luiz Velho

We introduce a neural implicit framework that bridges the discrete differential geometry of triangle meshes and the continuous differential geometry of neural implicit surfaces. It exploits the differentiable properties of neural networks and the discrete geometry of triangle meshes to approximate the latter as zero-level sets of neural implicit functions. To train a neural implicit function, we propose a loss function that admits terms with high-order derivatives, such as the alignment between principal directions, to learn finer geometric detail. During training, we use a non-uniform sampling strategy based on the discrete curvatures of the triangle mesh, favoring points in regions with more geometric detail. This sampling yields faster learning while preserving geometric accuracy. We present analytical differential-geometry formulas for neural surfaces, such as normal vectors and curvatures, and use them to render the surfaces via sphere tracing. Additionally, we propose a network optimization based on singular value decomposition to reduce the number of parameters.
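
For instance, normal and curvature formulas of this kind can be evaluated directly by automatic differentiation. The sketch below, a generic illustration rather than the paper's code, computes the unit normal as the normalized gradient of a smooth neural implicit f, and the mean curvature, up to sign and scale conventions, as the divergence of that normal field.

```python
import torch
import torch.nn as nn

# A smooth (twice-differentiable) toy implicit; the paper uses its own networks.
f = nn.Sequential(nn.Linear(3, 64), nn.Softplus(beta=100),
                  nn.Linear(64, 64), nn.Softplus(beta=100),
                  nn.Linear(64, 1))

p = (torch.rand(16, 3) * 2 - 1).requires_grad_(True)
y = f(p)

# Normal: grad f / |grad f| (create_graph=True keeps it differentiable).
grad = torch.autograd.grad(y.sum(), p, create_graph=True)[0]
normal = grad / grad.norm(dim=-1, keepdim=True)

# Mean curvature (up to convention): divergence of the unit normal field.
div = sum(torch.autograd.grad(normal[:, i].sum(), p, retain_graph=True)[0][:, i]
          for i in range(3))
print(normal.shape, div.shape)   # (16, 3), (16,)
```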
