Abstract: We present Pulse3DFace, the first dataset of its kind for estimating 3D blood pulsation maps. These maps can be used to develop models of dynamic facial blood pulsation, enabling the creation of synthetic video data to improve and validate remote pulse estimation methods via photoplethysmography imaging. Additionally, the dataset facilitates research into novel multi-view-based approaches for mitigating illumination effects in blood pulsation analysis. Pulse3DFace consists of raw videos from 15 subjects recorded at 30 Hz with an RGB camera from 23 viewpoints, blood pulse reference measurements, and facial 3D scans generated using monocular structure-from-motion techniques. It also includes processed 3D pulsation maps compatible with the texture space of the 3D head model FLAME. These maps provide signal-to-noise ratio, local pulse amplitude, phase information, and supplementary data. We offer a comprehensive evaluation of the dataset's illumination conditions, map consistency, and its ability to capture physiologically meaningful features in the facial and neck skin regions.
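
For illustration, a minimal sketch of how a per-texel signal-to-noise ratio map could be derived from temporal pulse signals in texture space. The (frames x height x width) array layout, the function name snr_map, and the spectral band width are assumptions made for this example and do not reproduce the dataset's actual processing pipeline; the SNR is taken as the ratio of spectral power near the reference heart rate (and its first harmonic) to the remaining power.

    import numpy as np

    def snr_map(texel_signals, fs, f_hr, band=0.1):
        """Per-texel SNR (dB) of a pulse signal: power near the reference heart
        rate f_hr and its first harmonic versus the rest of the spectrum.

        texel_signals: (T, H, W) temporal samples per texture-space location
                       (hypothetical layout, not the dataset's actual format).
        fs:            sampling rate in Hz (30 Hz for the raw videos).
        f_hr:          reference heart rate in Hz from the pulse reference.
        """
        T = texel_signals.shape[0]
        freqs = np.fft.rfftfreq(T, d=1.0 / fs)
        # Power spectrum along time after removing the per-texel mean.
        spec = np.abs(np.fft.rfft(texel_signals - texel_signals.mean(axis=0), axis=0)) ** 2
        # Signal band: around the heart rate and its first harmonic.
        in_band = (np.abs(freqs - f_hr) < band) | (np.abs(freqs - 2 * f_hr) < band)
        signal = spec[in_band].sum(axis=0)
        noise = spec[~in_band].sum(axis=0)
        return 10.0 * np.log10(signal / (noise + 1e-12))
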
Abstract: Depth cameras are an interesting modality for capturing vital signs such as respiratory rate. Plenty of approaches exist to extract vital signs in a controlled setting, but to apply them more flexibly, for example in multi-camera settings, a simulated environment is needed to generate enough data for training and testing new algorithms. We show first results of a 3D-rendering simulation pipeline that focuses on different noise models in order to generate realistic, depth-camera-based respiratory signals, using both synthetic and real respiratory signals as a baseline. While most noise can be accurately modelled as Gaussian in this context, we show that as soon as the available image resolution becomes too low, the differences between noise models surface.
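
A minimal sketch of the basic idea of a Gaussian noise model for a depth-camera respiratory signal, assuming a flat chest patch that moves sinusoidally towards the camera; the function name, parameters, and ROI-averaging step are illustrative assumptions, not the pipeline described above.

    import numpy as np

    def simulated_depth_resp(duration_s=30, fs=30, resp_hz=0.25,
                             amp_mm=4.0, roi_px=64, noise_std_mm=2.0, seed=0):
        """Sketch of a depth-camera respiratory signal with Gaussian sensor noise.

        A flat chest patch of roi_px x roi_px pixels moves sinusoidally towards
        the camera; each pixel receives i.i.d. Gaussian depth noise. The recovered
        signal is the ROI-mean depth per frame.
        """
        rng = np.random.default_rng(seed)
        t = np.arange(0, duration_s, 1.0 / fs)
        chest = amp_mm * np.sin(2 * np.pi * resp_hz * t)           # true motion (mm)
        noise = rng.normal(0.0, noise_std_mm, size=(t.size, roi_px, roi_px))
        frames = chest[:, None, None] + noise                      # per-pixel depth offset
        return t, frames.mean(axis=(1, 2))                         # ROI-averaged signal

Because the per-frame noise of the ROI mean scales with one over the square root of the pixel count, a 64 x 64 patch suppresses the 2 mm per-pixel noise to roughly 0.03 mm, while a 4 x 4 patch leaves about 0.5 mm, which is consistent with the observation that low resolutions expose differences between noise models.
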



Abstract: In this proof of concept, we use Computer Vision (CV) methods to extract pose information from exercise videos. We then employ a modified version of Dynamic Time Warping (DTW) to calculate the deviation from a gold-standard execution of the exercise. Specifically, we calculate the distance for each body part individually to obtain a more precise measure of exercise accuracy. We show that exercise mistakes are clearly visible, identifiable, and localizable through this metric.
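
A minimal sketch of the per-body-part comparison idea: a standard DTW distance is computed separately for each joint trajectory. The function names and the dict-of-joints input format are assumptions for illustration and do not reproduce the modified DTW used in the work above.

    import numpy as np

    def dtw_distance(a, b):
        """Plain dynamic-time-warping distance between two trajectories
        (frames x coords), with Euclidean local cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def per_joint_deviation(trial_pose, reference_pose):
        """DTW distance per body part between a trial and a reference execution.

        trial_pose, reference_pose: dicts mapping joint name -> (frames, 2) array
        of image coordinates, e.g. as produced by a 2D pose estimator.
        """
        return {joint: dtw_distance(trial_pose[joint], reference_pose[joint])
                for joint in reference_pose}

Reporting one distance per joint rather than a single pooled score is what makes mistakes localizable: a large deviation for, say, the left knee trajectory points directly at the body part where the execution diverges from the reference.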