Charlie Hewitt

Procedural Humans for Computer Vision

Jan 03, 2023
Charlie Hewitt, Tadas Baltrušaitis, Erroll Wood, Lohit Petikam, Louis Florentin, Hanz Cuevas Velasquez

Recent work has shown the benefits of synthetic data for use in computer vision, with applications ranging from autonomous driving to face landmark detection and reconstruction. Synthetic data offers a number of benefits, from privacy preservation and bias elimination to the quality and feasibility of annotation. Generating human-centered synthetic data is a particular challenge in terms of realism and domain gap, though recent work has shown that effective machine learning models can be trained using synthetic face data alone. We show that this can be extended to the full body by building on the pipeline of Wood et al. to generate synthetic images of humans in their entirety, with ground-truth annotations for computer vision applications. In this report we describe how we construct a parametric model of the face and body, including articulated hands; our rendering pipeline, which generates realistic images of humans based on this body model; an approach for training DNNs to regress a dense set of landmarks covering the entire body; and a method for fitting our body model to dense landmarks predicted from multiple views.
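
As a rough sketch of the last step, the following minimal example fits the shape coefficients of a linear body model to dense 2D landmarks predicted in several views by gradient descent; the model, camera, and dimensions are invented for illustration and are not the authors' actual pipeline.

```python
import torch

# Hypothetical linear body model: vertices = mean + sum_s beta_s * basis_s.
# All names and dimensions here are illustrative, not the authors' model.
N_VERTS, N_SHAPE = 1000, 10
mean_verts = torch.randn(N_VERTS, 3)
shape_basis = torch.randn(N_SHAPE, N_VERTS, 3) * 0.01

def project(verts, cam):
    """Simple pinhole projection; cam = (focal, cx, cy) for one view."""
    f, cx, cy = cam
    z = verts[:, 2] + 5.0  # push the model in front of the camera
    return torch.stack([f * verts[:, 0] / z + cx,
                        f * verts[:, 1] / z + cy], dim=-1)

def fit(landmarks_2d, cams, steps=200):
    """Fit shape coefficients to dense 2D landmarks from multiple views.

    landmarks_2d: list of (N_VERTS, 2) tensors, one per view.
    cams: list of (focal, cx, cy) tuples, one per view.
    """
    betas = torch.zeros(N_SHAPE, requires_grad=True)
    opt = torch.optim.Adam([betas], lr=0.05)
    for _ in range(steps):
        verts = mean_verts + torch.einsum("s,svc->vc", betas, shape_basis)
        # Sum reprojection error over all views, plus a simple shape prior.
        loss = sum(((project(verts, cam) - lms) ** 2).mean()
                   for lms, cam in zip(landmarks_2d, cams))
        loss = loss + 1e-3 * (betas ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return betas.detach()
```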

DigiFace-1M: 1 Million Digital Face Images for Face Recognition

Oct 05, 2022
Gwangbin Bae, Martin de La Gorce, Tadas Baltrusaitis, Charlie Hewitt, Dong Chen, Julien Valentin, Roberto Cipolla, Jingjing Shen

State-of-the-art face recognition models show impressive accuracy, achieving over 99.8% on the Labeled Faces in the Wild (LFW) dataset. Such models are trained on large-scale datasets that contain millions of real human face images collected from the internet. Web-crawled face images are severely biased (in terms of race, lighting, make-up, etc.) and often contain label noise. More importantly, the face images are collected without explicit consent, raising ethical concerns. To avoid such problems, we introduce a large-scale synthetic dataset for face recognition, obtained by rendering digital faces using a computer graphics pipeline. We first demonstrate that aggressive data augmentation can significantly reduce the synthetic-to-real domain gap. Having full control over the rendering pipeline, we also study how each attribute (e.g., variation in facial pose, accessories and textures) affects the accuracy. Compared to SynFace, a recent method trained on GAN-generated synthetic faces, we reduce the error rate on LFW by 52.5% (accuracy from 91.93% to 96.17%). By fine-tuning the network on a smaller number of real face images that could reasonably be obtained with consent, we achieve accuracy comparable to that of methods trained on millions of real face images.
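
As an illustration of what "aggressive data augmentation" can look like in practice, here is a plausible torchvision pipeline for synthetic face crops; the specific transforms and magnitudes are assumptions, not the paper's exact recipe.

```python
from torchvision import transforms

# A sketch of aggressive appearance augmentation for synthetic face
# crops; transforms and magnitudes are illustrative assumptions.
augment = transforms.Compose([
    transforms.RandomResizedCrop(112, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4,
                           saturation=0.4, hue=0.1),
    transforms.RandomGrayscale(p=0.1),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
    # Occlusion-style erasing operates on tensors, so it comes last.
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2)),
])
```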

WACV 2023

Mesh-Tension Driven Expression-Based Wrinkles for Synthetic Faces

Oct 05, 2022
Chirag Raman, Charlie Hewitt, Erroll Wood, Tadas Baltrusaitis

Recent advances in synthesizing realistic faces have shown that synthetic training data can replace real data for various face-related computer vision tasks. This raises a question: how important is realism, and is the pursuit of photorealism excessive? In this work, we show that it is not. We boost the realism of our synthetic faces by introducing dynamic skin wrinkles in response to facial expressions, and observe significant performance improvements in downstream computer vision tasks. Previous approaches for producing such wrinkles either required prohibitive artist effort to scale across identities and expressions or could not reconstruct high-frequency skin details with sufficient fidelity. Our key contribution is an approach that produces realistic wrinkles across a large and diverse population of digital humans. Concretely, we formalize the concept of mesh tension and use it to aggregate possible wrinkles from high-quality expression scans into albedo and displacement texture maps. At synthesis, we use these maps to produce wrinkles even for expressions not represented in the source scans. Additionally, to provide a more nuanced indicator of model performance under deformations resulting from compressed expressions, we introduce the 300W-winks evaluation subset and the Pexels dataset of closed eyes and winks.
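
The mesh-tension idea can be sketched compactly: compare edge lengths between a neutral mesh and an expressed mesh, and accumulate the relative change onto vertices, so that compressed regions can drive wrinkle maps. The following is a simplified reading of that concept, not the paper's exact formulation.

```python
import numpy as np

def vertex_tension(neutral_verts, expr_verts, edges):
    """Per-vertex mesh tension: relative change in incident edge length
    between a neutral mesh and an expression mesh. Positive values mean
    stretch, negative mean compression. A simplified reading of the
    mesh-tension idea; the paper's exact formulation may differ.

    neutral_verts, expr_verts: (V, 3) arrays with shared topology.
    edges: (E, 2) int array of vertex index pairs.
    """
    def edge_lengths(v):
        return np.linalg.norm(v[edges[:, 0]] - v[edges[:, 1]], axis=1)

    rel = edge_lengths(expr_verts) / edge_lengths(neutral_verts) - 1.0
    tension = np.zeros(len(neutral_verts))
    counts = np.zeros(len(neutral_verts))
    for col in (0, 1):  # accumulate each edge onto both endpoints
        np.add.at(tension, edges[:, col], rel)
        np.add.at(counts, edges[:, col], 1.0)
    return tension / np.maximum(counts, 1.0)

# The tension signal could then weight expression-dependent albedo and
# displacement maps, e.g. compressed regions (tension < 0) blending in
# a "compression wrinkle" texture.
```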

WACV 2023

3D face reconstruction with dense landmarks

Apr 06, 2022
Erroll Wood, Tadas Baltrusaitis, Charlie Hewitt, Matthew Johnson, Jingjing Shen, Nikola Milosavljevic, Daniel Wilde, Stephan Garbin, Toby Sharp, Ivan Stojiljkovic, Tom Cashman, Julien Valentin

Landmarks often play a key role in face analysis, but many aspects of identity or expression cannot be represented by sparse landmarks alone. Thus, in order to reconstruct faces more accurately, landmarks are often combined with additional signals like depth images or techniques like differentiable rendering. Can we keep things simple by just using more landmarks? In answer, we present the first method that accurately predicts 10x as many landmarks as usual, covering the whole head, including the eyes and teeth. This is accomplished using synthetic training data, which guarantees perfect landmark annotations. By fitting a morphable model to these dense landmarks, we achieve state-of-the-art results for monocular 3D face reconstruction in the wild. We show that dense landmarks are an ideal signal for integrating face shape information across frames by demonstrating accurate and expressive facial performance capture in both monocular and multi-view scenarios. This approach is also highly efficient: we can predict dense landmarks and fit our 3D face model at over 150 FPS on a single CPU thread.
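
One plausible way to make regressing 10x as many landmarks tractable is to predict each landmark as a 2D Gaussian with its own uncertainty and train with a negative log-likelihood, letting the network down-weight points it cannot see. The loss below sketches that idea; the exact formulation used in the paper is not claimed here.

```python
import torch

def landmark_nll(pred_mu, pred_log_sigma, target):
    """NLL for landmarks modeled as isotropic 2D Gaussians (mean plus
    per-landmark uncertainty). A sketch of probabilistic landmark
    regression, treated here as an assumption rather than the paper's
    exact loss.

    pred_mu:        (B, L, 2) predicted landmark positions.
    pred_log_sigma: (B, L) log std deviation per landmark.
    target:         (B, L, 2) ground-truth (synthetic) landmarks.
    """
    sq_dist = ((pred_mu - target) ** 2).sum(dim=-1)  # (B, L)
    # -log N(target; mu, sigma^2 I) up to a constant: the 2*log_sigma
    # term stops the network from inflating sigma everywhere.
    return (sq_dist / (2 * torch.exp(2 * pred_log_sigma))
            + 2 * pred_log_sigma).mean()
```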

Fake It Till You Make It: Face analysis in the wild using synthetic data alone

Oct 05, 2021
Erroll Wood, Tadas Baltrušaitis, Charlie Hewitt, Sebastian Dziadzio, Matthew Johnson, Virginia Estellers, Thomas J. Cashman, Jamie Shotton

We demonstrate that it is possible to perform face-related computer vision in the wild using synthetic data alone. The community has long enjoyed the benefits of synthesizing training data with graphics, but the domain gap between real and synthetic data has remained a problem, especially for human faces. Researchers have tried to bridge this gap with data mixing, domain adaptation, and domain-adversarial training, but we show that it is possible to synthesize data with minimal domain gap, so that models trained on synthetic data generalize to real in-the-wild datasets. We describe how to combine a procedurally-generated parametric 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism and diversity. We train machine learning systems for face-related tasks such as landmark localization and face parsing, showing that synthetic data can both match real data in accuracy and open up new approaches where manual labelling would be impossible.
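
Because every rendered image comes with exact labels for free, training reduces to ordinary supervised learning. The sketch below shows this for landmark regression; render_synthetic_batch is a stand-in for the graphics pipeline, not a real API, and the landmark count is illustrative.

```python
import torch
from torch import nn
from torchvision.models import resnet18

N_LANDMARKS = 68  # illustrative; any landmark scheme works

def render_synthetic_batch(batch_size=32):
    # Placeholder: a real pipeline would rasterize a sampled 3D face
    # and return the projected landmark coordinates as exact labels.
    images = torch.rand(batch_size, 3, 256, 256)
    landmarks = torch.rand(batch_size, N_LANDMARKS, 2)
    return images, landmarks

model = resnet18(num_classes=N_LANDMARKS * 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(1000):
    images, landmarks = render_synthetic_batch()
    pred = model(images).view(-1, N_LANDMARKS, 2)
    loss = nn.functional.mse_loss(pred, landmarks)  # labels are perfect
    opt.zero_grad(); loss.backward(); opt.step()
```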

ICCV 2021

A high fidelity synthetic face framework for computer vision

Jul 16, 2020
Tadas Baltrusaitis, Erroll Wood, Virginia Estellers, Charlie Hewitt, Sebastian Dziadzio, Marek Kowalski, Matthew Johnson, Thomas J. Cashman, Jamie Shotton

Analysis of faces is one of the core applications of computer vision, with tasks ranging from landmark alignment and head pose estimation to expression recognition and face recognition. However, building reliable methods requires time-consuming data collection and often even more time-consuming manual annotation, which can be unreliable. In our work we propose synthesizing such facial data, including ground-truth annotations, at a consistency and scale that would be almost impossible to achieve through manual annotation. We use a parametric face model together with hand-crafted assets which enable us to generate training data with unprecedented quality and diversity (varying shape, texture, expression, pose, lighting, and hair).
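
A procedural pipeline of this kind typically samples independent scene parameters for every render. The sketch below shows one plausible shape for that sampling step; all field names, asset names, and ranges are invented for illustration.

```python
import random
from dataclasses import dataclass

@dataclass
class FaceScene:
    shape_coeffs: list      # identity coefficients of the face model
    expression_coeffs: list
    texture_id: int         # index into a hand-crafted texture library
    hair_asset: str
    head_pose_deg: tuple    # (yaw, pitch, roll)
    hdri_environment: str   # image-based lighting choice

def sample_scene(n_shape=50, n_expr=20):
    """Draw one random, fully labelled scene description."""
    return FaceScene(
        shape_coeffs=[random.gauss(0, 1) for _ in range(n_shape)],
        expression_coeffs=[random.gauss(0, 1) for _ in range(n_expr)],
        texture_id=random.randrange(200),
        hair_asset=random.choice(["short_01", "long_03", "curly_02"]),
        head_pose_deg=(random.uniform(-45, 45),
                       random.uniform(-30, 30),
                       random.uniform(-15, 15)),
        hdri_environment=random.choice(["studio", "outdoor", "indoor"]),
    )
```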

Shape-only Features for Plant Leaf Identification

Nov 20, 2018
Charlie Hewitt, Marwa Mahmoud

This paper presents a novel feature set for shape-only leaf identification motivated by real-world, mobile deployment. The feature set includes basic shape features, as well as signal features extracted from local area integral invariants (LAIIs), similar to curvature maps, at multiple scales. The proposed methodology is evaluated on a number of publicly available leaf datasets, with results comparable to existing methods that make use of colour and texture features in addition to shape. Over 90% classification accuracy is achieved on most datasets, with top-four accuracy reaching over 98%. Rotation and scale invariance of the proposed features are demonstrated, along with an evaluation of the generalisability of the approach to generic shape matching.
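
A local area integral invariant has a direct reading: for each boundary point, measure the fraction of a disk of a given radius that lies inside the shape, so a flat boundary gives 0.5, convexities less, and concavities more. The sketch below follows that reading; normalization and sampling details are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def laii_signal(mask, contour, radius):
    """LAII at each contour point: fraction of a disk of the given
    radius that falls inside the shape. One radius gives one scale of
    the multi-scale feature set.

    mask:    (H, W) binary array, 1 inside the leaf.
    contour: (N, 2) integer array of (row, col) boundary points.
    radius:  disk radius in pixels.
    """
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x * x + y * y <= radius * radius).astype(float)
    # Area of the mask under the disk, centered at every pixel.
    area = convolve(mask.astype(float), disk, mode="constant")
    frac = area / disk.sum()
    return frac[contour[:, 0], contour[:, 1]]  # sample along boundary

# Stacking laii_signal over several radii yields a multi-scale signal
# from which rotation-invariant statistics can be extracted.
```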
