Most current research in computer vision focuses on single images without taking temporal information into account. We present a probabilistic non-parametric model that fuses multiple information cues from the capture device to segment regions containing moving objects in image sequences. We prepared an experimental setup to show the importance of using temporal information to obtain an accurate segmentation result, using a novel dataset that provides sequences in the RGBDT space. We label the detected regions with a state-of-the-art human detector; each detected region is marked as human at least once.
The problem of detecting changes in a scene and segmenting the foreground from the background remains challenging, despite extensive previous work. Moreover, new RGBD capturing devices provide depth cues that can be incorporated to improve foreground segmentation. In this work, we present a new nonparametric approach in which a unified model fuses the multiple information cues of the device. To unify all the device channel cues, a new probabilistic depth data model is also proposed, in which we show how to handle inaccurate depth data to improve foreground segmentation. A new RGBD video dataset is presented in order to establish a standard for comparing algorithms of this kind. Results show that the proposed approach handles several practical situations and obtains good results in all cases.
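To make the idea concrete, the sketch below shows one common way to fuse colour and depth cues in a nonparametric (kernel density) background model, with a simple fallback when the depth reading is invalid. This is an illustrative sketch, not the paper's actual model: the function names, bandwidths, and threshold are assumptions chosen for the example.

```python
import numpy as np

def kde_score(samples, pixel, bandwidth):
    """Average Gaussian-kernel likelihood of `pixel` under past `samples`.

    samples:   (N, C) array of past per-pixel observations (e.g. R, G, B, D).
    pixel:     (C,) current observation.
    bandwidth: (C,) per-channel kernel widths.
    """
    diff = (samples - pixel) / bandwidth              # (N, C) normalized residuals
    kern = np.exp(-0.5 * np.sum(diff ** 2, axis=1))   # product of 1-D Gaussian kernels
    norm = np.prod(np.sqrt(2.0 * np.pi) * bandwidth)  # normalization constant
    return kern.sum() / (len(samples) * norm)

def is_foreground(samples, pixel, bandwidth, depth_valid=True, thresh=1e-8):
    """Classify a pixel as foreground when its KDE likelihood is low.

    When the depth reading is invalid (a common failure mode of RGBD
    sensors), fall back to the colour channels only -- one simple way to
    handle inaccurate depth data, used here purely for illustration.
    """
    if not depth_valid:
        samples = samples[:, :3]
        pixel = pixel[:3]
        bandwidth = bandwidth[:3]
    return kde_score(samples, pixel, bandwidth) < thresh
```

A pixel whose colour and depth stay close to its history scores a high likelihood and is kept as background; a pixel far from all past samples scores near zero and is flagged as foreground. Fusing the channels multiplicatively in one kernel, rather than thresholding colour and depth separately, is what lets each cue compensate for the other.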