Patrick Perez

ToddlerDiffusion: Flash Interpretable Controllable Diffusion Model

Nov 24, 2023
Eslam Mohamed Bakr, Liangbing Zhao, Vincent Tao Hu, Matthieu Cord, Patrick Perez, Mohamed Elhoseiny

Diffusion-based generative models excel at perceptually impressive synthesis but face challenges in interpretability. This paper introduces ToddlerDiffusion, an interpretable 2D diffusion image-synthesis framework inspired by the human generation system. Unlike traditional diffusion models with opaque denoising steps, our approach decomposes the generation process into simpler, interpretable stages: generating contours, a palette, and a detailed colored image. This not only enhances overall performance but also enables robust editing and interaction capabilities. Each stage is meticulously formulated for efficiency and accuracy, surpassing Stable Diffusion (LDM). Extensive experiments on datasets like LSUN-Churches and COCO validate our approach, consistently outperforming existing methods. ToddlerDiffusion achieves notable efficiency, matching LDM performance on LSUN-Churches while operating three times faster with a 3.76 times smaller architecture. Our source code is provided in the supplementary material and will be publicly accessible.
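
The abstract describes a staged decomposition that a minimal sketch can illustrate: three chained modules produce contours, then a palette, then the final image, with each intermediate output inspectable (or editable) before the next stage runs. All module names and architectures below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a staged generation pipeline in the spirit of
# ToddlerDiffusion; the stage modules and interfaces are assumptions.
import torch
import torch.nn as nn

class Stage(nn.Module):
    """Placeholder denoiser for one stage (a small UNet in practice)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class ToddlerPipeline(nn.Module):
    """Decompose synthesis into contour -> palette -> final image stages."""
    def __init__(self):
        super().__init__()
        self.contour_stage = Stage(in_ch=1, out_ch=1)   # noise -> edge map
        self.palette_stage = Stage(in_ch=1, out_ch=3)   # edges -> coarse colors
        self.detail_stage  = Stage(in_ch=4, out_ch=3)   # edges+colors -> image

    def forward(self, noise):
        contours = torch.sigmoid(self.contour_stage(noise))
        palette  = torch.sigmoid(self.palette_stage(contours))
        # The final stage sees both intermediate outputs, so each step
        # can be inspected or edited before the next one runs.
        final = torch.sigmoid(self.detail_stage(torch.cat([contours, palette], 1)))
        return contours, palette, final

noise = torch.randn(1, 1, 64, 64)
contours, palette, image = ToddlerPipeline()(noise)
print(contours.shape, palette.shape, image.shape)
```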

Teachers in concordance for pseudo-labeling of 3D sequential data

Jul 13, 2022
Awet Haileslassie Gebrehiwot, Patrik Vacek, David Hurych, Karel Zimmermann, Patrick Perez, Tomáš Svoboda

Automatic pseudo-labeling is a powerful tool to tap into large amounts of sequential unlabeled data. It is especially appealing in safety-critical applications of autonomous driving, where performance requirements are extreme, datasets are large, and manual labeling is very challenging. We propose to leverage the sequentiality of the captures to boost the pseudo-labeling technique in a teacher-student setup by training multiple teachers, each with access to different temporal information. This set of teachers, dubbed Concordance, provides higher-quality pseudo-labels for the student training than standard methods. The outputs of the multiple teachers are combined via a novel pseudo-label confidence-guided criterion. Our experimental evaluation focuses on the 3D point cloud domain in urban driving scenarios. We show the performance of our method applied to multiple model architectures on the tasks of 3D semantic segmentation and 3D object detection on two benchmark datasets. Our method, using only 20% of manual labels, outperforms some of the fully supervised methods. A particularly strong performance boost is achieved for classes rarely appearing in the training data, e.g., bicycles and pedestrians. The implementation of our approach is publicly available at https://github.com/ctu-vras/T-Concord3D.
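
As a rough illustration of the fusion step, the sketch below combines several teachers' per-point class probabilities with a confidence-guided rule. The specific rule (keep the most confident teacher per point and discard low-confidence points) is an assumption for illustration; the paper defines its own criterion.

```python
# Minimal sketch of confidence-guided fusion of several teachers'
# per-point predictions; the aggregation rule is an assumption.
import numpy as np

def fuse_pseudo_labels(teacher_probs, conf_threshold=0.9):
    """teacher_probs: list of (num_points, num_classes) softmax outputs,
    one array per teacher (each trained with a different temporal window).
    Returns pseudo-labels and a mask of points confident enough to train on."""
    stacked = np.stack(teacher_probs)            # (T, N, C)
    conf = stacked.max(axis=2)                   # (T, N) per-teacher confidence
    best_teacher = conf.argmax(axis=0)           # (N,) most confident teacher
    n = np.arange(stacked.shape[1])
    labels = stacked[best_teacher, n].argmax(axis=1)
    keep = conf[best_teacher, n] >= conf_threshold
    return labels, keep

rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(5), size=1000) for _ in range(3)]
labels, keep = fuse_pseudo_labels(probs)
print(f"{keep.mean():.0%} of points kept for student training")
```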

* This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible 

The Missing Data Encoder: Cross-Channel Image Completion with Hide-And-Seek Adversarial Network

May 06, 2019
Arnaud Dapogny, Matthieu Cord, Patrick Perez

Image completion is the problem of generating whole images from fragments only. It encompasses inpainting (generating a patch given its surroundings), reverse inpainting/extrapolation (generating the periphery given the central patch), as well as colorization (generating one or several channels given the others). In this paper, we employ a deep network to perform image completion, with adversarial training as well as perceptual and completion losses, and call it the "missing data encoder" (MDE). We consider several configurations based on how the seed fragments are chosen. We show that training MDE for "random extrapolation and colorization" (MDE-REC), i.e., using random channel-independent fragments, allows a better capture of the image semantics and geometry. MDE training makes use of a novel "hide-and-seek" adversarial loss, where the discriminator seeks the original non-masked regions, while the generator tries to hide them. We validate our models both qualitatively and quantitatively on several datasets, demonstrating their value for image completion, unsupervised representation learning, and face occlusion handling.
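
A minimal sketch of the MDE-REC input corruption described above, assuming the seed fragments are axis-aligned rectangles chosen independently per channel (an illustrative assumption; the paper's exact sampling scheme is not reproduced here):

```python
# Sketch of "random extrapolation and colorization" input corruption:
# each channel keeps an independent random rectangular fragment and the
# rest is zeroed out. Fragment sizes and placement are illustrative.
import numpy as np

def mask_channels_independently(image, frac=0.3, rng=None):
    """image: (H, W, C) float array. For each channel, keep one random
    rectangle covering roughly `frac` of each side and hide the rest."""
    rng = rng or np.random.default_rng()
    h, w, c = image.shape
    masked = np.zeros_like(image)
    for ch in range(c):
        fh, fw = int(h * frac), int(w * frac)
        y = rng.integers(0, h - fh + 1)
        x = rng.integers(0, w - fw + 1)
        # The network must extrapolate geometry (missing area) and
        # colorize (missing channels elsewhere) from these seeds.
        masked[y:y+fh, x:x+fw, ch] = image[y:y+fh, x:x+fw, ch]
    return masked

img = np.random.rand(128, 128, 3)
seed_input = mask_channels_independently(img, frac=0.3)
```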

WoodScape: A multi-task, multi-camera fisheye dataset for autonomous driving

May 04, 2019
Senthil Yogamani, Ciaran Hughes, Jonathan Horgan, Ganesh Sistu, Padraig Varley, Derek O'Dea, Michal Uricar, Stefan Milz, Martin Simon, Karl Amende, Christian Witt, Hazem Rashed, Sumanth Chennupati, Sanjaya Nayak, Saquib Mansoor, Xavier Perroton, Patrick Perez

Fisheye cameras are commonly employed to obtain a large field of view in surveillance, augmented reality, and, in particular, automotive applications. In spite of their prevalence, there are few public datasets for detailed evaluation of computer vision algorithms on fisheye images. We release the first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, who invented the fisheye camera in 1906. WoodScape comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection, and soiling detection. Semantic annotation of 40 classes at the instance level is provided for over 10,000 images, and annotations for the other tasks are provided for over 100,000 images. We would like to encourage the community to adapt computer vision models to fisheye cameras instead of relying on naive rectification.

* The dataset and code for baseline experiments will be provided in stages upon publication of this paper 

Unsupervised Image Matching and Object Discovery as Optimization

Apr 05, 2019
Huy V. Vo, Francis Bach, Minsu Cho, Kai Han, Yann LeCun, Patrick Perez, Jean Ponce

Learning with complete or partial supervision is powerful but relies on ever-growing human annotation efforts. As a way to mitigate this serious problem, as well as to serve specific applications, unsupervised learning has emerged as an important field of research. In computer vision, unsupervised learning comes in various guises. We focus here on the unsupervised discovery and matching of object categories among images in a collection, following the work of Cho et al. 2015. We show that the original approach can be reformulated and solved as a proper optimization problem. Experiments on several benchmarks establish the merit of our approach.

* Accepted to CVPR 2019 

Exploring applications of deep reinforcement learning for real-world autonomous driving systems

Jan 16, 2019
Victor Talpaert, Ibrahim Sobh, B Ravi Kiran, Patrick Mannion, Senthil Yogamani, Ahmad El-Sallab, Patrick Perez

Deep Reinforcement Learning (DRL) has become increasingly powerful in recent years, with notable achievements such as DeepMind's AlphaGo. It has been successfully deployed in commercial vehicles like Mobileye's path planning system. However, the vast majority of work on DRL is focused on toy examples in controlled synthetic car simulator environments such as TORCS and CARLA. In general, DRL is still in its infancy in terms of usability in real-world applications. Our goal in this paper is to encourage real-world deployment of DRL in various autonomous driving (AD) applications. We first provide an overview of the tasks in autonomous driving systems, reinforcement learning algorithms, and applications of DRL to AD systems. We then discuss the challenges which must be addressed to enable further progress towards real-world deployment.

* Accepted for Oral Presentation at VISAPP 2019 

Learning how to be robust: Deep polynomial regression

May 23, 2018
Juan-Manuel Perez-Rua, Tomas Crivelli, Patrick Bouthemy, Patrick Perez

Polynomial regression is a recurring problem with a large number of applications. In computer vision it often appears in motion analysis. Whatever the application, standard methods for regression of polynomial models tend to deliver biased results when the input data is heavily contaminated by outliers. Moreover, the problem is even harder when the outliers have strong structure. Departing from problem-tailored heuristics for robust estimation of parametric models, we explore deep convolutional neural networks. Our work aims to find a generic approach for training deep regression models without the explicit need for supervised annotation. We bypass the need for a tailored loss function on the regression parameters by attaching to our model a differentiable hard-wired decoder corresponding to the polynomial operation at hand. We demonstrate the value of our findings by comparing with standard robust regression methods. Furthermore, we demonstrate how to use such models for a real computer vision problem, namely video stabilization. The qualitative and quantitative experiments show that neural networks are able to learn robustness for general polynomial regression, with results that clearly surpass those of traditional robust estimation methods.
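
The hard-wired decoder idea lends itself to a short sketch: the network regresses polynomial coefficients, and a fixed, differentiable polynomial evaluation maps them back to values, so the loss is computed in observation space rather than on the coefficients themselves. The toy architecture and loss below are assumptions for illustration, not the paper's model.

```python
# Sketch of the "hard-wired differentiable decoder" idea: the network
# outputs polynomial coefficients, a fixed Vandermonde-style evaluation
# turns them back into predicted values, and the loss compares those
# values to the (possibly outlier-contaminated) observations.
import torch
import torch.nn as nn

degree = 2
net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, degree + 1))

def decode(coeffs, x):
    """Fixed polynomial decoder: y = sum_k c_k * x^k, differentiable in c."""
    powers = torch.stack([x ** k for k in range(coeffs.shape[-1])], dim=-1)
    return (powers * coeffs.unsqueeze(1)).sum(-1)

# Toy batch: 32 noisy samples of a quadratic on 64 fixed locations.
x = torch.linspace(-1, 1, 64)
true_c = torch.tensor([0.5, -1.0, 2.0])
y = decode(true_c.expand(32, -1), x) + 0.1 * torch.randn(32, 64)

coeffs = net(y)                               # network regresses coefficients
recon = decode(coeffs, x)                     # hard-wired decoder
loss = nn.functional.smooth_l1_loss(recon, y) # loss in observation space
loss.backward()
```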

* 18 pages, conference 

Structural inpainting

Mar 27, 2018
Huy V. Vo, Ngoc Q. K. Duong, Patrick Perez

Scene-agnostic visual inpainting remains very challenging despite progress in patch-based methods. Recently, Pathak et al. 2016 introduced convolutional "context encoders" (CEs) for unsupervised feature learning through image completion tasks. With the additional help of adversarial training, CEs turned out to be a promising tool for completing complex structures in real inpainting problems. In the present paper, we propose to push this key ability further by relying on perceptual reconstruction losses at training time. We show on a wide variety of visual scenes the merit of the approach for structural inpainting, and confirm it through a user study. Combined with the optimization-based refinement of Yang et al. 2016 with neural patches, our context encoder opens up new opportunities for prior-free visual inpainting.
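
A minimal sketch of such a perceptual reconstruction loss, assuming a recent torchvision and a VGG16 feature extractor as the perceptual network (the paper's exact layer choice and loss weighting may differ):

```python
# Sketch of a perceptual reconstruction loss for a context encoder:
# compare completed and original images in the feature space of a frozen
# pretrained network, in addition to pixel space.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen feature extractor (ImageNet normalization omitted for brevity).
features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(completed, target, pixel_weight=1.0, feat_weight=0.1):
    """L1 in pixel space plus L2 between mid-level VGG activations,
    pushing the context encoder to reproduce structure, not just color."""
    pix = F.l1_loss(completed, target)
    feat = F.mse_loss(features(completed), features(target))
    return pixel_weight * pix + feat_weight * feat

fake = torch.rand(2, 3, 128, 128, requires_grad=True)  # generator output
real = torch.rand(2, 3, 128, 128)
perceptual_loss(fake, real).backward()  # gradients flow to the generator
```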

Automatic Face Reenactment

Feb 08, 2016
Pablo Garrido, Levi Valgaerts, Ole Rehmsen, Thorsten Thormaehlen, Patrick Perez, Christian Theobalt

We propose an image-based facial reenactment system that replaces the face of an actor in an existing target video with the face of a user from a source video, while preserving the original target performance. Our system is fully automatic and does not require a database of source expressions. Instead, it is able to produce convincing reenactment results from a short source video captured with an off-the-shelf camera, such as a webcam, in which the user performs arbitrary facial gestures. Our reenactment pipeline is conceived as part image retrieval and part face transfer: the image retrieval is based on temporal clustering of target frames and a novel image matching metric that combines appearance and motion to select candidate frames from the source video, while the face transfer uses a 2D warping strategy that preserves the user's identity. Our system excels in simplicity: it does not rely on a 3D face model, is robust under head motion, and does not require the source and target performances to be similar. We show convincing reenactment results for videos that we recorded ourselves and for low-quality footage taken from the Internet.
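
A toy sketch of a retrieval score in this spirit, mixing an appearance distance with a motion distance; the concrete descriptors (raw pixel differences and precomputed flow fields) and the mixing weight are illustrative assumptions, not the paper's metric:

```python
# Candidate-frame selection combining appearance and motion distances.
import numpy as np

def match_score(src_frame, tgt_frame, src_flow, tgt_flow, alpha=0.5):
    """Lower is better: weighted sum of appearance and motion distances."""
    appearance = np.linalg.norm(src_frame - tgt_frame) / src_frame.size
    motion = np.linalg.norm(src_flow - tgt_flow) / src_flow.size
    return alpha * appearance + (1.0 - alpha) * motion

def select_candidates(src_frames, src_flows, tgt_frame, tgt_flow, k=5):
    """Rank all source frames against one target frame, keep the k best."""
    scores = [match_score(f, tgt_frame, fl, tgt_flow)
              for f, fl in zip(src_frames, src_flows)]
    return np.argsort(scores)[:k]

# Toy data: 20 grayscale frames and per-frame optical-flow fields.
rng = np.random.default_rng(0)
frames = rng.random((20, 64, 64))
flows = rng.random((20, 64, 64, 2))
print(select_candidates(frames, flows, frames[3], flows[3]))  # 3 ranks first
```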

* Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (8 pages) 

Hybrid multi-layer Deep CNN/Aggregator feature for image classification

Mar 13, 2015
Praveen Kulkarni, Joaquin Zepeda, Frederic Jurie, Patrick Perez, Louis Chevallier

Deep Convolutional Neural Networks (DCNNs) have established a remarkable performance benchmark in the field of image classification, displacing classical approaches based on hand-tailored aggregations of local descriptors. Yet DCNNs impose high computational burdens both at training and at testing time, and training them requires collecting and annotating large amounts of training data. Supervised adaptation methods have been proposed in the literature that partially re-learn a transferred DCNN structure on a new target dataset. Yet these require expensive bounding-box annotations and are still computationally expensive to learn. In this paper, we address these shortcomings of DCNN adaptation schemes by proposing a hybrid approach that combines conventional, unsupervised aggregators such as Bag-of-Words (BoW) with the DCNN pipeline by treating the output of intermediate layers as densely extracted local descriptors. We test a variant of our approach that uses only intermediate DCNN layers on the standard PASCAL VOC 2007 dataset and show performance significantly higher than the standard BoW model and comparable to Fisher vector aggregation, but with a feature that is 150 times smaller. A second variant of our approach that includes the fully connected DCNN layers significantly outperforms Fisher vector schemes and performs comparably to DCNN approaches adapted to PASCAL VOC 2007, yet at only a small fraction of the training and testing cost.
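
A compact sketch of the hybrid idea under assumed choices (VGG16 backbone, an intermediate layer cut, a 64-word k-means codebook): each spatial position of an intermediate feature map is read as one local descriptor, and the descriptors are pooled into a BoW histogram.

```python
# Intermediate conv activations as dense local descriptors, aggregated
# with a Bag-of-Words histogram. Backbone, layer cut, and codebook size
# are assumptions for illustration.
import torch
from torchvision.models import vgg16
from sklearn.cluster import KMeans
import numpy as np

backbone = vgg16(weights="IMAGENET1K_V1").features[:17].eval()  # mid-layer cut

def dense_descriptors(image_batch):
    """Each spatial position of the feature map becomes one descriptor."""
    with torch.no_grad():
        fmap = backbone(image_batch)                 # (B, C, H, W)
    b, c, h, w = fmap.shape
    return fmap.permute(0, 2, 3, 1).reshape(b, h * w, c).numpy()

images = torch.rand(4, 3, 224, 224)                  # stand-in for real data
desc = dense_descriptors(images)                     # (4, H*W, C)

# Learn a visual codebook over all descriptors, then histogram per image.
codebook = KMeans(n_clusters=64, n_init=3).fit(desc.reshape(-1, desc.shape[-1]))

def bow_feature(d):
    words = codebook.predict(d)                      # assign each descriptor
    hist = np.bincount(words, minlength=64).astype(float)
    return hist / hist.sum()                         # normalized BoW histogram

features = np.stack([bow_feature(d) for d in desc])  # (4, 64) image features
```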

* Accepted in ICASSP 2015 conference, 5 pages including reference, 4 figures and 2 tables 