The question of whether to use a single classifier or a combination of classifiers is a central topic in machine learning. Here we propose a method for finding an optimal linear combination of classifiers, derived from a bias-variance framework for the classification task.
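The abstract does not specify how the optimal weights are obtained, so the following is only an illustrative sketch of one common approach (not the paper's method): choosing the weight of a convex combination of two classifiers' probability estimates by minimizing the Brier score on a held-out validation set. The toy probabilities and labels are invented for illustration.

```python
# Illustrative sketch (not the paper's method): pick the weight w for the
# convex combination w*p1 + (1-w)*p2 of two classifiers' probability
# estimates by grid search on a validation set.

def brier(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 labels."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def best_weight(p1, p2, labels, steps=100):
    """Grid-search w in [0, 1] minimizing the Brier score of the combination."""
    best_w, best_loss = 0.0, float("inf")
    for i in range(steps + 1):
        w = i / steps
        combined = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
        loss = brier(combined, labels)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w, best_loss

# Toy validation data: classifier 1 is fairly well calibrated, classifier 2 noisy.
labels = [0, 0, 1, 1, 1, 0]
p1 = [0.2, 0.3, 0.8, 0.7, 0.9, 0.1]
p2 = [0.6, 0.4, 0.5, 0.9, 0.6, 0.5]
w, loss = best_weight(p1, p2, labels)
```

Because the grid includes w=0 and w=1, the combined loss can never be worse than the better of the two individual classifiers on the validation set.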
Cross-border security is a top priority for societies: economies lose billions each year to counterfeiting and other threats. Airports, ports, border crossings and customs authorities rely on X-ray security systems, known as Non-Intrusive Inspection Systems (NIIS), to inspect bags, vehicles, and air, land, sea and rail cargo. This reliance means the X-ray scanning systems must remain operational around the clock, so their working condition has to be closely monitored and preemptive action taken to reduce overall downtime. In this paper, we present a predictive maintenance decision support system, PMT4NIIS (Predictive Maintenance Tool for Non-Intrusive Inspection Systems), an augmented analytics platform that provides real-time AI-generated warnings of upcoming risks of system malfunction that could lead to downtime. The industrial platform underpins a 24/7 service desk and monitoring center for the working condition of various X-ray security systems.
The implied posterior probability of a given model (say, a Support Vector Machine (SVM)) at a point $\mathbf{x}$ is an estimate of the class posterior probability pertaining to the class of functions the model fits to a given dataset. It can be regarded as a score (or estimate) for the true posterior probability, which can then be calibrated, i.e., mapped onto the posterior probability implied by the underlying functions that generated the data. In this tutorial we discuss how to compute implied posterior probabilities of SVMs in the binary classification case, and how to calibrate them via the standard method of isotonic regression.
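The calibration step named in the abstract can be sketched concretely. Below is a minimal pure-Python implementation of isotonic regression via the pool-adjacent-violators (PAV) algorithm; the SVM decision values and labels are toy data invented for illustration, not taken from the tutorial.

```python
# Minimal sketch of calibrating classifier scores with isotonic regression,
# using the pool-adjacent-violators (PAV) algorithm.

def pav(labels):
    """Fit a nondecreasing sequence to 0/1 labels (least squares),
    assuming the labels are already ordered by increasing classifier score."""
    # Each block stores [sum, count]; merge while a block's mean
    # exceeds the mean of the block after it (a monotonicity violation).
    merged = []
    for y in labels:
        merged.append([y, 1])
        while len(merged) > 1 and merged[-2][0] * merged[-1][1] > merged[-1][0] * merged[-2][1]:
            s, c = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += c
    # Expand block means back to one calibrated probability per example.
    out = []
    for s, c in merged:
        out.extend([s / c] * c)
    return out

# Toy SVM decision values (sorted) and the observed class labels.
scores = [-2.0, -1.3, -0.4, 0.1, 0.8, 1.5, 2.2]
labels = [0, 0, 1, 0, 1, 1, 1]
calibrated = pav(labels)  # nondecreasing estimates of P(y=1 | score)
```

The violating pair at scores $-0.4$ and $0.1$ (labels 1 then 0) is pooled into a single block with mean $0.5$, yielding the nondecreasing fit $[0, 0, 0.5, 0.5, 1, 1, 1]$.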
One of the central themes in the classification task is the estimation of the class posterior probability at a new point $\mathbf{x}$. The vast majority of classifiers output a score for $\mathbf{x}$ that is monotonically related to the posterior probability through an unknown relationship, and there are many attempts in the literature to estimate this relationship. Here, we provide a way to estimate the posterior probability without resorting to classification scores. Instead, we vary the prior probabilities of the classes in order to derive the ratio of the class-conditional pdfs at the point $\mathbf{x}$, which directly determines the class posterior probabilities. We consider here the binary classification problem.
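The final step of the idea, going from a pdf ratio to a posterior, is just Bayes' rule, and can be sketched numerically. The Gaussian class-conditional densities below are an illustrative assumption, not the paper's construction; the point is that once the ratio $f_1(\mathbf{x})/f_0(\mathbf{x})$ is known, the posterior follows for any choice of priors.

```python
# Sketch (illustrative densities, not the paper's method): given the ratio
# r(x) = f1(x)/f0(x) of class-conditional pdfs at x, Bayes' rule gives
# P(y=1 | x) = pi1*r / (pi1*r + pi0) for class priors pi1, pi0.

from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density, used here as a stand-in class-conditional pdf."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def posterior_from_ratio(ratio, prior1):
    """P(y=1 | x) from the pdf ratio f1(x)/f0(x) and the class-1 prior."""
    prior0 = 1.0 - prior1
    return prior1 * ratio / (prior1 * ratio + prior0)

x = 0.6
f0 = gauss_pdf(x, mu=0.0, sigma=1.0)  # class-0 density at x
f1 = gauss_pdf(x, mu=1.0, sigma=1.0)  # class-1 density at x
ratio = f1 / f0
p1 = posterior_from_ratio(ratio, prior1=0.5)  # posterior under equal priors
```

With equal priors this reduces to $f_1(x)/(f_1(x)+f_0(x))$; since $x=0.6$ lies closer to the class-1 mean, the posterior for class 1 exceeds $1/2$.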