Gaussian mixture models have been extensively used and enhanced in the surveillance domain because of their ability to adaptively describe multimodal distributions in real time with low memory requirements. Nevertheless, they often converge to poor solutions when the dominant mode stretches and thereby over-dominates weaker distributions. We propose the use of complementary background models for the detection of static and moving objects in crowded video sequences.
Per-pixel adaptive Gaussian mixture models (GMMs) have become a popular choice for the detection of change in the video surveillance domain because of their ability to cope with many of the challenges characteristic of surveillance scenarios in real time and with low memory requirements. Since their first introduction in the surveillance domain, GMMs have been enhanced in many directions. In this paper, we present a study of some relevant GMM approaches and analyze their underlying assumptions and design decisions. Based on this analysis, we show how these systems can be further improved by means of a variance-controlling scheme and the incorporation of region-analysis-based feedback. The proposed system has been thoroughly evaluated using the extensive data set of the IEEE Workshop on Change Detection, showing outstanding performance in comparison with state-of-the-art methods.
Heras Evangelio, R., Pätzold, M., Keller, I., Sikora, T., Adaptively Splitted GMM with Feedback Improvement for the Task of Background Subtraction, IEEE Transactions on Information Forensics & Security, 2014
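To make the per-pixel GMM idea concrete, the following is a minimal sketch of a Stauffer-Grimson-style mixture update for a single grayscale pixel. It is not the adaptively split variant from the paper above; the learning rate, matching threshold, background-weight fraction, and initial variance are illustrative assumptions.

```python
import numpy as np

def update_gmm_pixel(x, means, variances, weights,
                     alpha=0.01, match_thresh=2.5, bg_fraction=0.7):
    """Sketch of a per-pixel GMM update (Stauffer-Grimson style).

    x: new grayscale observation for this pixel.
    means, variances, weights: 1-D arrays, one entry per mode.
    alpha, match_thresh, bg_fraction: assumed parameter values.
    Returns updated parameters and True if x matched a background mode.
    """
    # Find modes within match_thresh standard deviations of x.
    dist = np.abs(x - means) / np.sqrt(variances)
    matches = np.where(dist < match_thresh)[0]

    if matches.size > 0:
        k = matches[0]
        # Pull the matched mode towards the observation.
        rho = min(alpha / max(weights[k], 1e-6), 1.0)
        means[k] += rho * (x - means[k])
        variances[k] += rho * ((x - means[k]) ** 2 - variances[k])
        weights += alpha * ((np.arange(len(weights)) == k) - weights)
    else:
        # No match: replace the weakest mode with a new, wide one on x.
        k = int(np.argmin(weights))
        means[k], variances[k], weights[k] = x, 30.0 ** 2, alpha
    weights /= weights.sum()

    # Modes with the largest weight/stddev ratio model the background.
    order = np.argsort(-weights / np.sqrt(variances))
    cum = np.cumsum(weights[order])
    bg_modes = order[: np.searchsorted(cum, bg_fraction) + 1]
    is_background = matches.size > 0 and matches[0] in bg_modes
    return means, variances, weights, is_background
```

Repeated observations of a stable value shrink the variance of the matched mode and grow its weight, which is exactly the stretching/dominance behaviour that variance-controlling schemes such as the one above are designed to keep in check.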
In this paper we propose the use of complementary background models for the detection of static and moving objects in crowded video sequences. One model is devoted to accurately detecting motion, while the other aims to achieve a representation of the empty scene. The differences in foreground detection between the complementary models are used to identify new static regions. A subsequent analysis of the detected regions is used to ascertain whether an object was placed in or removed from the scene. Static objects are prevented from being incorporated into the empty-scene model. Removed objects are rapidly dropped from both models. In this way, we build a very precise model of the empty scene and improve the foreground segmentation results of a single background model.
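The idea of exploiting the disagreement between complementary models can be sketched with two simple running-average backgrounds: a fast one that quickly absorbs stopped objects and a slow one that approximates the empty scene. A pixel that the fast model already considers background but the slow model still flags as foreground is a static-object candidate. The learning rates and the threshold below are assumed values, not those of the paper, and running averages stand in for the full per-pixel mixtures.

```python
import numpy as np

def detect_static_mask(frames, alpha_fast=0.2, alpha_slow=0.005, thresh=25.0):
    """Toy dual-background sketch for static-region candidates.

    frames: sequence of equal-sized grayscale frames (arrays).
    Returns a boolean mask over the last frame: True where the fast
    model has absorbed the pixel but the slow (empty-scene) model
    still sees foreground, i.e. a candidate static object.
    """
    frames = np.asarray(frames, dtype=float)
    fast = frames[0].copy()   # short-term model, high learning rate
    slow = frames[0].copy()   # empty-scene model, low learning rate
    for f in frames[1:]:
        fast += alpha_fast * (f - fast)
        slow += alpha_slow * (f - slow)
    last = frames[-1]
    fg_fast = np.abs(last - fast) > thresh
    fg_slow = np.abs(last - slow) > thresh
    return (~fg_fast) & fg_slow
```

In the full system the candidate mask would then be analysed at region level, and confirmed static objects would be explicitly blocked from ever entering the empty-scene model rather than merely being absorbed slowly.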
Designing static object detection systems that are able to incorporate user interaction provides a great benefit in many surveillance applications, since some correctly detected static objects may be considered of no interest by a human operator. Interactive systems allow the user to feed these decisions back into the system, making automated surveillance systems more attractive and comfortable to use. In this paper we present a system for the detection of static objects that, based on the detections of a dual background model, classifies pixels by means of a finite-state machine. The state machine provides the semantics for interpreting the results obtained from background subtraction, and it can optionally be used to integrate user input. The system can thus be used both in an automatic and in an interactive manner without requiring any expert knowledge from the user.
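A per-pixel state machine of this kind can be illustrated as follows. This is not the exact machine from the paper: the states, transitions, and the `hold` count of consecutive frames before a candidate is declared static are all illustrative assumptions. Transitions are driven by the two binary foreground flags produced by the dual background model.

```python
from enum import Enum

class PixelState(Enum):
    BACKGROUND = 0        # pixel agrees with both models
    MOVING = 1            # foreground in both models
    CANDIDATE_STATIC = 2  # absorbed by short-term, not by long-term model
    STATIC = 3            # confirmed static object

def next_state(state, fg_short, fg_long, counter, hold=30):
    """One illustrative per-pixel transition (assumed, simplified machine).

    fg_short / fg_long: foreground flags of the short- and long-term
    background models. `counter` counts consecutive frames spent in
    CANDIDATE_STATIC; after `hold` frames the pixel becomes STATIC.
    Returns (new_state, new_counter).
    """
    if state is PixelState.BACKGROUND:
        if fg_short and fg_long:
            return PixelState.MOVING, 0
    elif state is PixelState.MOVING:
        if not fg_short and fg_long:
            return PixelState.CANDIDATE_STATIC, 0
        if not fg_short and not fg_long:
            return PixelState.BACKGROUND, 0
    elif state is PixelState.CANDIDATE_STATIC:
        if fg_short:
            return PixelState.MOVING, 0
        if not fg_long:
            return PixelState.BACKGROUND, 0
        if counter + 1 >= hold:
            return PixelState.STATIC, 0
        return PixelState.CANDIDATE_STATIC, counter + 1
    elif state is PixelState.STATIC:
        if not fg_long:
            return PixelState.BACKGROUND, 0  # object was removed
    return state, counter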