Topic: Video Background Subtraction
What is Video Background Subtraction? Video background subtraction is the process of separating moving objects from the static background in video sequences.
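In its simplest form, this means maintaining an estimate of the static background and flagging pixels that deviate from it. The sketch below illustrates the idea under minimal assumptions (a running-average background model over grayscale frames and a fixed threshold; real systems use far more robust statistics):

```python
# Minimal sketch of background subtraction, assuming uint8 grayscale frames.
import numpy as np

def background_subtract(frames, alpha=0.05, threshold=30):
    """Yield a binary foreground mask (1 = moving pixel) for each frame."""
    background = None
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()                  # initialize from the first frame
        mask = (np.abs(f - background) > threshold).astype(np.uint8)
        # Update the background only where the scene currently looks static.
        background = np.where(mask == 0, (1 - alpha) * background + alpha * f, background)
        yield mask
```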
Papers and Code
Jun 17, 2025
Abstract:In general, background subtraction-based methods are used to detect moving objects in visual tracking applications. In this paper, we employ a background subtraction-based scheme to detect temporarily stationary objects. We propose two schemes for stationary object detection and compare them in terms of detection performance and computational complexity. The first approach uses a single background; the second uses dual backgrounds, generated with different learning rates, to detect temporarily stopped objects. Finally, we use normalized cross-correlation (NCC)-based image comparison to monitor and track the detected stationary object in a video scene. The proposed method is robust to partial occlusion, short-term full occlusion, and illumination changes, and it can operate in real time.
* Smart Media Journal 1 (2012) 48-55
* 8 pages, 6 figures
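A rough OpenCV sketch of the dual-background idea from this abstract: two MOG2 subtractors stand in for the fast- and slow-learning backgrounds, and normalized cross-correlation (cv2.matchTemplate with TM_CCORR_NORMED) monitors a detected stationary region. The subtractor choice, learning rates, file name, and thresholds are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch: pixels absorbed by the fast background model but still foreground
# in the slow one are candidates for temporarily stationary objects; NCC then checks
# whether the object is still present in later frames.
import cv2

cap = cv2.VideoCapture("scene.mp4")                  # hypothetical input video
bg_fast = cv2.createBackgroundSubtractorMOG2()       # short-term background
bg_slow = cv2.createBackgroundSubtractorMOG2()       # long-term background
tracked = None                                       # (x, y, w, h, template)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_fast = bg_fast.apply(gray, learningRate=0.05)     # adapts quickly
    fg_slow = bg_slow.apply(gray, learningRate=0.001)    # adapts slowly
    stationary = cv2.bitwise_and(fg_slow, cv2.bitwise_not(fg_fast))
    contours, _ = cv2.findContours(stationary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if tracked is None:
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h > 400:                               # ignore small blobs
                tracked = (x, y, w, h, gray[y:y + h, x:x + w].copy())
                break
    else:
        x, y, w, h, template = tracked
        # NCC between the stored snapshot and the same region in the current frame.
        ncc = cv2.matchTemplate(gray[y:y + h, x:x + w], template, cv2.TM_CCORR_NORMED)[0, 0]
        if ncc > 0.9:                                     # object still stationary
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        else:                                             # object moved away
            tracked = None
```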

May 21, 2025
Abstract:Frequently, multiple entities (methods, algorithms, procedures, solutions, etc.) can be developed for a common task and applied across various domains that differ in the distribution of scenarios encountered. For example, in computer vision, the input data provided to image analysis methods depend on the type of sensor used, its location, and the scene content. However, a crucial difficulty remains: can we predict which entities will perform best in a new domain based on assessments on known domains, without having to carry out new and costly evaluations? This paper presents an original methodology to address this question, in a leave-one-domain-out fashion, for various application-specific preferences. We illustrate its use with 30 strategies to predict the rankings of 40 entities (unsupervised background subtraction methods) on 53 domains (videos).
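A small sketch of what a leave-one-domain-out ranking experiment of this kind can look like: given a matrix of per-domain scores, one simple prediction strategy (the average rank over the known domains) is compared against the true ranking on the held-out domain with Spearman correlation. The random scores matrix and the single strategy are placeholders for the paper's real benchmark data and 30 strategies.

```python
# Hedged sketch of a leave-one-domain-out ranking evaluation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
scores = rng.random((40, 53))        # stand-in for 40 entities (methods) x 53 domains (videos)

correlations = []
for held_out in range(scores.shape[1]):
    known = np.delete(scores, held_out, axis=1)
    # Rank entities within each known domain (0 = best), then average those ranks.
    ranks_known = np.argsort(np.argsort(-known, axis=0), axis=0)
    predicted_rank = ranks_known.mean(axis=1)
    # True ranking on the held-out domain.
    true_rank = np.argsort(np.argsort(-scores[:, held_out]))
    rho, _ = spearmanr(predicted_rank, true_rank)
    correlations.append(rho)

print(f"mean Spearman correlation over {len(correlations)} held-out domains: "
      f"{np.mean(correlations):.3f}")
```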

May 12, 2025
Abstract:Background subtraction (BGS) is utilized to detect moving objects in a video and is commonly employed at the onset of object tracking and human recognition processes. Nevertheless, existing BGS techniques utilizing deep learning still encounter challenges with various background noises in videos, including variations in lighting, shifts in camera angles, and disturbances like air turbulence or swaying trees. To address this problem, we design a spiking autoencoder network, termed SAEN-BGS, based on the noise resilience and time-sequence sensitivity of spiking neural networks (SNNs) to enhance the separation of foreground and background. To eliminate unnecessary background noise and preserve important foreground elements, we begin by creating the continuous spiking conv-and-dconv block, which serves as the fundamental building block for the decoder in SAEN-BGS. Moreover, in striving for enhanced energy efficiency, we introduce a novel self-distillation spiking supervised learning method grounded in ANN-to-SNN frameworks, resulting in decreased power consumption. In extensive experiments conducted on the CDnet-2014 and DAVIS-2016 datasets, our approach demonstrates superior segmentation performance relative to other baseline methods, even when challenged by complex scenarios with dynamic backgrounds.
* Accepted by Pattern Recognition
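For intuition about the temporal behaviour this abstract leans on, here is a minimal leaky integrate-and-fire (LIF) layer in plain PyTorch; brief, uncorrelated inputs rarely accumulate enough membrane potential to spike, which is the noise-resilience property being exploited. This is only an illustration of an SNN building block, not the SAEN-BGS architecture or its conv-and-dconv decoder.

```python
# Hedged sketch of a LIF spiking layer; beta and threshold are illustrative.
import torch
import torch.nn as nn

class LIFNeuron(nn.Module):
    """Membrane potential leaks each step, integrates input, and spikes past a threshold."""
    def __init__(self, beta: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.beta = beta              # leak factor
        self.threshold = threshold

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (time_steps, batch, features) -> spikes of the same shape
        mem = torch.zeros_like(inputs[0])
        spikes = []
        for x_t in inputs:
            mem = self.beta * mem + x_t                 # leaky integration
            spike = (mem >= self.threshold).float()
            mem = mem - spike * self.threshold          # soft reset after a spike
            spikes.append(spike)
        return torch.stack(spikes)

# A per-pixel encoder could feed frame differences through such a layer so that
# short-lived noise seldom accumulates enough potential to spike.
lif = LIFNeuron()
frames = torch.rand(10, 1, 64 * 64)                     # 10 time steps of a toy 64x64 video
print(lif(frames).shape)                                # torch.Size([10, 1, 4096])
```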

Dec 31, 2024
Abstract:Robust matrix completion (RMC) is a widely used machine learning tool that simultaneously tackles two critical issues in low-rank data analysis: missing data entries and extreme outliers. This paper proposes a novel, scalable, and learnable non-convex approach, coined Learned Robust Matrix Completion (LRMC), for large-scale RMC problems. LRMC enjoys low computational complexity with linear convergence. Motivated by the proposed theorem, the free parameters of LRMC can be effectively learned via deep unfolding to achieve optimum performance. Furthermore, this paper proposes a flexible feedforward-recurrent-mixed neural network framework that extends deep unfolding from a fixed number of iterations to infinitely many iterations. The superior empirical performance of LRMC is verified with extensive experiments against state-of-the-art methods on synthetic datasets and real applications, including video background subtraction, ultrasound imaging, face modeling, and cloud removal from satellite imagery.
* arXiv admin note: substantial text overlap with arXiv:2110.05649
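As background for the generic RMC model (not the learned LRMC algorithm itself), a basic robust matrix completion loop can be sketched by alternating a truncated SVD for the low-rank part with soft-thresholding of the sparse outliers on the observed entries. The rank, regularization weight, and toy data below are illustrative.

```python
# Hedged sketch: alternating low-rank / sparse updates for robust matrix completion.
import numpy as np

def robust_matrix_completion(Y, mask, rank=2, lam=0.1, iters=50):
    """Y: observed matrix (frames as columns), mask: 1 where Y is observed."""
    L = np.zeros_like(Y)              # low-rank part (static background)
    S = np.zeros_like(Y)              # sparse part (moving objects / outliers)
    for _ in range(iters):
        # Sparse update: soft-threshold the observed residual.
        R = mask * (Y - L)
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
        # Low-rank update: fill unobserved entries with the current estimate,
        # then project onto rank-`rank` matrices via a truncated SVD.
        X = mask * (Y - S) + (1 - mask) * L
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, S

# Toy video: each column is a vectorized frame = static background + one moving outlier pixel.
rng = np.random.default_rng(0)
background = np.outer(rng.random(100), np.ones(30))
frames = background.copy()
frames[rng.integers(0, 100, 30), np.arange(30)] += 5.0
mask = (rng.random(frames.shape) < 0.9).astype(float)     # ~10% missing entries
L, S = robust_matrix_completion(frames * mask, mask, rank=1)
print(np.abs(L - background).mean())                      # recovery error of the background
```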

Feb 01, 2024
Abstract:Video foreground segmentation (VFS) is an important computer vision task wherein one aims to segment the objects under motion from the background. Most current methods are image-based, i.e., they rely only on spatial cues while ignoring motion cues, so they tend to overfit the training data and do not generalize well to out-of-domain (OOD) distributions. To solve this problem, prior works exploited several cues such as optical flow and background subtraction masks. However, obtaining video data with annotations such as optical flow is a challenging task. In this paper, we utilize the temporal information and the spatial cues from the video data to improve OOD performance. The challenge lies in modeling the temporal information of the video data in an interpretable way, and how this is done creates a very noticeable difference. We therefore devise a strategy that integrates the temporal context of the video into the development of VFS. Our approach gives rise to two deep learning architectures, namely MUSTAN1 and MUSTAN2, which are based on the idea of multi-scale temporal context as attention, i.e., it aids our models in learning better representations that are beneficial for VFS. Further, we introduce a new video dataset, namely the Indoor Surveillance Dataset (ISD), for VFS. It has multiple frame-level annotations, such as foreground binary masks, depth maps, and instance semantic annotations, and can therefore benefit other computer vision tasks. We validate the efficacy of our architectures and compare their performance with baselines, demonstrating that the proposed methods significantly outperform the benchmark methods on OOD data. In addition, the performance of MUSTAN2 is significantly improved on certain video categories of OOD data thanks to ISD.
* 10 pages, 8 figures
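A hedged sketch of what "multi-scale temporal context as attention" can mean in code: past-frame features are pooled over a few temporal windows and converted into a spatial attention map applied to the current frame's features. The backbones, fusion, and heads of MUSTAN1/MUSTAN2 are not reproduced; the module below is purely illustrative.

```python
# Hedged sketch of multi-scale temporal context used as spatial attention.
import torch
import torch.nn as nn

class TemporalContextAttention(nn.Module):
    def __init__(self, channels: int, scales=(2, 4, 8)):
        super().__init__()
        self.scales = scales
        self.to_attn = nn.Conv2d(channels * len(scales), 1, kernel_size=1)

    def forward(self, current: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # current: (B, C, H, W); history: (B, T, C, H, W) features of past frames
        contexts = [history[:, -s:].mean(dim=1) for s in self.scales]   # one pooled map per scale
        attn = torch.sigmoid(self.to_attn(torch.cat(contexts, dim=1)))  # (B, 1, H, W)
        return current * attn                                           # attention-weighted features

feats_now = torch.rand(1, 16, 32, 32)
feats_past = torch.rand(1, 8, 16, 32, 32)
print(TemporalContextAttention(16)(feats_now, feats_past).shape)        # (1, 16, 32, 32)
```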

Sep 27, 2023
Abstract:Video background subtraction is one of the fundamental problems in computer vision that aims to segment all moving objects. Robust principal component analysis (RPCA) has been identified as a promising unsupervised paradigm for background subtraction tasks in the last decade thanks to its competitive performance on a number of benchmark datasets. Tensor robust principal component analysis variations have further improved background subtraction performance. However, because moving-object pixels in the sparse component are treated independently and do not have to adhere to spatial-temporal structured-sparsity constraints, performance degrades on sequences with dynamic backgrounds, camouflaged objects, and camera jitter. In this work, we present a spatial-temporal regularized tensor sparse RPCA algorithm for precise background subtraction. Within the sparse component, we impose spatial-temporal regularizations in the form of normalized graph-Laplacian matrices. To do this, we build two graphs, one across the input tensor's spatial locations and the other across its frontal slices in the time domain. While maximizing the objective function, we compel the tensor sparse component to serve as the spatiotemporal eigenvectors of the graph-Laplacian matrices. The disconnected moving-object pixels in the sparse component are preserved by the proposed graph-based regularizations since both encode spatiotemporal subspace-based structure. Additionally, we propose a unique objective function that employs batch and online optimization methods to jointly maximize the background-foreground and spatial-temporal regularization components. Experiments on six publicly available background subtraction datasets demonstrate the superior performance of the proposed algorithm compared to several existing methods. Our source code will be available very soon.
* Under review
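The two graphs described in this abstract can be made concrete as follows: a spatial adjacency over pixel locations and a temporal chain over the tensor's frontal slices, each turned into a normalized graph Laplacian. How these Laplacians enter the tensor RPCA objective is not reproduced here; the construction below is a minimal sketch with illustrative sizes.

```python
# Hedged sketch of the spatial and temporal normalized graph Laplacians.
import numpy as np

def normalized_laplacian(W: np.ndarray) -> np.ndarray:
    """L = I - D^{-1/2} W D^{-1/2} for a symmetric adjacency matrix W."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

# Spatial graph: 4-neighbour grid over an h x w frame.
h, w = 8, 8
W_spatial = np.zeros((h * w, h * w))
for r in range(h):
    for c in range(w):
        i = r * w + c
        if c + 1 < w:
            W_spatial[i, i + 1] = W_spatial[i + 1, i] = 1.0
        if r + 1 < h:
            W_spatial[i, i + w] = W_spatial[i + w, i] = 1.0

# Temporal graph: chain linking consecutive frames (frontal slices of the tensor).
T = 20
W_temporal = np.zeros((T, T))
for t in range(T - 1):
    W_temporal[t, t + 1] = W_temporal[t + 1, t] = 1.0

L_spatial = normalized_laplacian(W_spatial)
L_temporal = normalized_laplacian(W_temporal)
print(L_spatial.shape, L_temporal.shape)   # (64, 64) (20, 20)
```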

Jan 05, 2024
Abstract:Inland waterways are critical for freight movement, but limited means exist for monitoring their performance and usage by freight-carrying vessels, e.g., barges. While methods to track vessels, e.g., tug and tow boats, are publicly available through Automatic Identification Systems (AIS), ways to track freight tonnages and commodity flows carried on barges along these critical marine highways are non-existent, especially in real-time settings. This paper develops a method to detect barge traffic on inland waterways using existing traffic cameras with opportune viewing angles. Deep learning models, specifically You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), and EfficientDet, are employed. The model detects the presence of vessels and/or barges from video and performs a classification (no vessel or barge, vessel without barge, vessel with barge, and barge). A dataset of 331 annotated images was collected from five existing traffic cameras along the Mississippi and Ohio Rivers for model development. YOLOv8 achieves an F1-score of 96%, outperforming the YOLOv5, SSD, and EfficientDet models with 86%, 79%, and 77%, respectively. Sensitivity analysis was carried out regarding weather conditions (fog and rain) and location (Mississippi and Ohio rivers). A background subtraction technique was used to normalize video images across the various locations for the location sensitivity analysis. This model can be used to detect the presence of barges along river segments, which can be used for anonymous bulk commodity tracking and monitoring. Such data is valuable for long-range transportation planning efforts carried out by public transportation agencies, in addition to operational and maintenance planning conducted by federal agencies such as the US Army Corps of Engineers.
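A hedged sketch of the normalization idea mentioned above: a background-subtraction mask suppresses site-specific static background before the detector runs, so frames from different camera locations look more alike. The weights file and video path are hypothetical, and the paper's exact normalization procedure may differ.

```python
# Hedged sketch: MOG2 foreground mask used to normalize frames before YOLO inference.
import cv2
from ultralytics import YOLO

model = YOLO("barge_yolov8.pt")                  # hypothetical fine-tuned barge detector
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

cap = cv2.VideoCapture("river_camera.mp4")       # hypothetical traffic-camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg.apply(frame)
    # Keep only moving regions (vessels/barges) and black out the static background.
    normalized = cv2.bitwise_and(frame, frame, mask=fg_mask)
    results = model(normalized, verbose=False)
    for box in results[0].boxes:
        print(int(box.cls), float(box.conf))     # predicted class id and confidence
```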

Dec 04, 2023
Abstract:Cable-based arrestment systems are integral to the launch and recovery of aircraft onboard carriers and on expeditionary land-based installations. These modern arrestment systems rely on various mechanisms to absorb energy from an aircraft during an arrestment cycle to bring the aircraft to a full stop. One of the primary components of this system is the cable interface to the engine. The formation of slack in the cable at this interface can result in reduced efficiency and drives maintenance efforts to remove the slack prior to continued operations. In this paper, a machine vision-based slack detection system is presented. A situational awareness camera is utilized to collect video data of the cable interface region, and machine vision algorithms are applied to reduce noise, remove background clutter, focus on regions of interest, and detect changes in the image representative of slack formations. Some algorithms employed in this system include bilateral image filters, least-squares polynomial fitting, Canny edge detection, K-means clustering, Gaussian mixture-based background/foreground segmentation for background subtraction, Hough circle transforms, and Hough line transforms. The resulting detections are filtered and highlighted to create an indication to the shipboard operator of the presence of slack and the need for a maintenance action. A user interface was designed to provide operators with an easy method to redefine regions of interest and adjust the methods to specific locations. The algorithms were validated on shipboard footage and were able to accurately identify slack with minimal false positives.
* 6 pages, 9 figures, Published in the Proceedings of the ASNE 2023
Technology, Systems & Ships Symposium. Reproduced with permission from the
American Society of Naval Engineers. NAVAIR Public Release 2023-31
Distribution Statement A - "Approved for public release; distribution is
unlimited"
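A minimal sketch chaining a few of the listed algorithms (bilateral filtering, Gaussian mixture background/foreground segmentation, Canny edges, Hough line transform) into one frame-processing step; the parameters, file name, and the slack-decision rule itself are illustrative assumptions, not the deployed system's settings.

```python
# Hedged sketch of a filter -> background subtraction -> edge -> line pipeline.
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2()

def process_frame(frame: np.ndarray):
    smoothed = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)
    fg_mask = bg.apply(smoothed)                       # suppress static background clutter
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.bitwise_and(edges, edges, mask=fg_mask)
    # Line segments in the moving-cable region; drooping or duplicated segments
    # would be the cue for slack in the full system described above.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    return [] if lines is None else lines[:, 0]        # one (x1, y1, x2, y2) per segment

cap = cv2.VideoCapture("cable_interface.mp4")          # hypothetical camera footage
ok, frame = cap.read()
if ok:
    print(len(process_frame(frame)), "line segments detected")
```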

Mar 06, 2023
Abstract:Background subtraction is a fundamental task in computer vision with numerous real-world applications, ranging from object tracking to video surveillance. Dynamic backgrounds pose a significant challenge here. Supervised deep learning-based techniques are currently considered state-of-the-art for this task. However, these methods require pixel-wise ground-truth labels, which can be time-consuming and expensive to obtain. In this work, we propose a weakly supervised framework that can perform background subtraction without requiring per-pixel ground-truth labels. Our framework is trained on a moving-object-free sequence of images and comprises two networks. The first network is an autoencoder that generates background images and prepares dynamic background images for training the second network. The dynamic background images are obtained by thresholding the background-subtracted images. The second network is a U-Net that uses the same object-free video for training and the dynamic background images as pixel-wise ground-truth labels. During the test phase, the input images are processed by the autoencoder and U-Net, which generate background and dynamic background images, respectively. The dynamic background image helps remove dynamic motion from the background-subtracted image, enabling us to obtain a foreground image that is free of dynamic artifacts. To demonstrate the effectiveness of our method, we conducted experiments on selected categories of the CDnet 2014 dataset and the I2R dataset. Our method outperformed all top-ranked unsupervised methods. We also achieved better results than one of the two existing weakly supervised methods, and our performance was similar to the other. Our proposed method is online, real-time, efficient, and requires minimal frame-level annotation, making it suitable for a wide range of real-world applications.
* 10 pages, 3 figures
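The pseudo-label step described above can be sketched in a few lines: the autoencoder-generated background is subtracted from an object-free frame and thresholded, so that residual dynamic motion (waving trees, rippling water) becomes the "dynamic background" target for the U-Net. The threshold value and image format below are assumptions.

```python
# Hedged sketch of the dynamic-background pseudo-label and its use at test time.
import numpy as np

def dynamic_background_label(frame: np.ndarray, generated_bg: np.ndarray,
                             threshold: float = 25.0) -> np.ndarray:
    """frame, generated_bg: uint8 grayscale images from the object-free sequence."""
    diff = np.abs(frame.astype(np.float32) - generated_bg.astype(np.float32))
    return (diff > threshold).astype(np.uint8)        # 1 = dynamic background pixel

def foreground(frame, generated_bg, dynamic_bg_pred, threshold=25.0):
    # At test time the same subtraction runs on real frames, and pixels flagged by
    # the predicted dynamic-background map are removed from the raw foreground.
    raw_fg = dynamic_background_label(frame, generated_bg, threshold)
    return np.clip(raw_fg.astype(np.int16) - dynamic_bg_pred.astype(np.int16), 0, 1).astype(np.uint8)
```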

Mar 26, 2023
Abstract:Background subtraction (BGS) aims to extract all moving objects in the video frames to obtain binary foreground segmentation masks. Deep learning has been widely used in this field. Compared with supervised BGS methods, unsupervised methods generalize better. However, previous unsupervised deep learning BGS algorithms perform poorly in sophisticated scenarios such as shadows or night lights, and they cannot detect objects outside the pre-defined categories. In this work, we propose an unsupervised BGS algorithm based on zero-shot object detection called Zero-shot Background Subtraction (ZBS). The proposed method fully utilizes the advantages of zero-shot object detection to build an open-vocabulary instance-level background model. Based on it, the foreground can be effectively extracted by comparing the detection results of new frames with the background model. ZBS performs well in sophisticated scenarios, and it has rich and extensible categories. Furthermore, our method can easily generalize to other tasks, such as abandoned object detection in unseen environments. We experimentally show that ZBS surpasses state-of-the-art unsupervised BGS methods by 4.70% F-Measure on the CDnet 2014 dataset. The code is released at https://github.com/CASIA-IVA-Lab/ZBS.
* Accepted by CVPR 2023. Code is available at
https://github.com/CASIA-IVA-Lab/ZBS
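A sketch of the instance-level comparison step: detections in a new frame are matched against the boxes held in the background model by IoU, and anything unmatched is treated as foreground. The zero-shot, open-vocabulary detector itself is abstracted away as a list of boxes here; see the released code linked above for the actual ZBS pipeline.

```python
# Hedged sketch: IoU matching of new detections against an instance-level background model.
from typing import List, Tuple

Box = Tuple[float, float, float, float]            # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def foreground_instances(detections: List[Box], background_model: List[Box],
                         iou_thresh: float = 0.5) -> List[Box]:
    """Detections matching no instance in the background model are foreground."""
    return [d for d in detections
            if all(iou(d, b) < iou_thresh for b in background_model)]

background_model = [(10, 10, 50, 80)]              # e.g. a parked car seen while building the model
detections = [(12, 11, 49, 79), (100, 40, 140, 90)]
print(foreground_instances(detections, background_model))   # [(100, 40, 140, 90)]
```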
