Yohan Dupuis

SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving

Mar 09, 2022
Ahmed Rida Sekkat, Yohan Dupuis, Varun Ravi Kumar, Hazem Rashed, Senthil Yogamani, Pascal Vasseur, Paul Honeine

Surround-view cameras are a primary sensor for automated driving, used for near-field perception, and are among the most commonly deployed sensors in commercial vehicles. Four fisheye cameras with a 190° field of view cover the full 360° around the vehicle. Due to their high radial distortion, standard algorithms do not extend to them easily. Previously, we released the first public fisheye surround-view dataset, named WoodScape. In this work, we release a synthetic version of the surround-view dataset that addresses many of its weaknesses and extends it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth in the real dataset. Secondly, WoodScape did not provide all four cameras simultaneously, in order to sample diverse frames; as a result, multi-camera algorithms could not be designed on it, which the new dataset enables. We implemented surround-view fisheye geometric projections in the CARLA Simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks, along with the baseline code and supporting scripts.
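
As an illustration of the kind of fisheye geometry involved, the sketch below projects a 3D point with a polynomial radial model, in which the image radius is a polynomial in the incidence angle. This is a common parameterization for heavily distorted fisheye lenses of the sort described above; the function name, coefficients, and principal point are illustrative assumptions, not the actual WoodScape or CARLA calibration.

import numpy as np

# Hypothetical polynomial fisheye model: the image radius r is a polynomial
# in the incidence angle theta, r(theta) = k1*theta + ... + k4*theta^4.
# Coefficients and principal point are illustrative placeholders only.
def project_fisheye(point_cam, k=(340.0, 30.0, 45.0, -8.0), cx=640.0, cy=480.0):
    """Project a 3D point (camera frame, z along the optical axis) to pixels."""
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)                     # angle off the optical axis
    r = sum(ki * theta ** (i + 1) for i, ki in enumerate(k))  # radial distortion mapping
    phi = np.arctan2(y, x)                                    # azimuth around the axis
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

u, v = project_fisheye((1.0, 0.5, 2.0))  # a point 2 m ahead of the camera, off-axis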

Efficient LiDAR data compression for embedded V2I or V2V data handling

Apr 11, 2019
Paul Caillet, Yohan Dupuis

LiDARs are increasingly used in intelligent vehicles (IV) and intelligent transportation systems (ITS). Storage and transmission of the data they generate are among the most challenging aspects of their deployment. In this paper, we present a method to efficiently compress LiDAR data in order to facilitate storage and transmission in V2V or V2I applications. The method supports both lossless and lossy compression and is specifically designed for embedded applications with low processing power. It is also designed to be easily integrated into existing processing chains, as it keeps the structure of the data stream intact. We benchmark our method on several publicly available datasets and compare it with state-of-the-art LiDAR data compression methods from the literature.
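
A minimal sketch of the general idea, assuming a delta-plus-entropy-coding scheme (not the paper's actual codec): quantize the ranges along a scan line, store differences between neighbouring returns, and entropy-code the small residuals. The function names and the resolution_m parameter are hypothetical; resolution_m acts as the knob that trades precision for size, in the spirit of the lossless/lossy switch described above.

import numpy as np
import zlib

def compress_ranges(ranges_m, resolution_m=0.005):
    """Quantize, delta-encode, then entropy-code one scan line of ranges."""
    q = np.round(np.asarray(ranges_m) / resolution_m).astype(np.int32)
    deltas = np.diff(q, prepend=0)          # neighbours are correlated -> small deltas
    return zlib.compress(deltas.tobytes())

def decompress_ranges(blob, resolution_m=0.005):
    """Exact inverse of compress_ranges (lossless w.r.t. the quantized stream)."""
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return np.cumsum(deltas) * resolution_m

blob = compress_ranges([12.401, 12.406, 12.412, 30.007])  # toy scan line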

LiDAR point clouds correction acquired from a moving car based on CAN-bus data

Jun 19, 2017
Pierre Merriaux, Yohan Dupuis, Rémi Boutteau, Pascal Vasseur, Xavier Savatier

In this paper, we investigate the impact of different kinds of car trajectories on LiDAR scans. LiDAR scanning speeds are considerably slower than car speeds, which introduces distortions into the scans. We propose a method to overcome this issue, as well as new metrics based on CAN-bus data. Our results suggest that the vehicle trajectory should be taken into account when building large-scale 3D maps from a LiDAR mounted on a moving vehicle.
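
As a rough illustration of such a correction (a minimal sketch, not the paper's method), the snippet below de-skews a 2D scan under a constant speed and yaw rate, the kind of ego-motion one can read off the CAN bus: every return is re-expressed in the vehicle frame at the end of the sweep. The function name, the 0.1 s sweep duration (a typical 10 Hz spinning LiDAR), and the straight-line displacement approximation are all illustrative assumptions.

import numpy as np

def deskew(points, timestamps, speed_mps, yaw_rate_rps, sweep_s=0.1):
    """points: (N, 2) xy returns in the sensor frame at their capture time;
    timestamps: (N,) seconds since the start of the sweep."""
    out = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        dt = sweep_s - t                    # remaining time until the sweep ends
        dyaw = yaw_rate_rps * dt            # heading change accumulated over dt
        dx, dy = speed_mps * dt, 0.0        # straight-line displacement approximation
        c, s = np.cos(dyaw), np.sin(dyaw)
        qx, qy = p[0] - dx, p[1] - dy       # remove the translation...
        out[i] = (c * qx + s * qy, -s * qx + c * qy)  # ...then the rotation
    return out

corrected = deskew(np.array([[10.0, 2.0], [9.8, 2.1]]),
                   np.array([0.00, 0.05]), speed_mps=15.0, yaw_rate_rps=0.1)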
