Abstract: In many scenarios of binary classification, only positive instances are provided in the training data, leaving the rest of the data unlabeled. This setup, known as positive-unlabeled (PU) learning, is addressed here with a network-flow-based method that utilizes pairwise similarities between samples. The method we propose, 2-HNC, leverages Hochbaum's Normalized Cut (HNC) and the set of solutions it provides by solving a parametric minimum cut problem. These solutions, which are nested partitions of the samples into two sets, correspond to varying tradeoff values between two goals: high intra-similarity within each set and low inter-similarity between the sets. This nested sequence is utilized here to deliver a ranking of unlabeled samples by their likelihood of being negative. Building on this ranking, 2-HNC proceeds in two stages. The first stage generates the ranking without assuming any negative labels, using a problem formulation that is constrained only by the positively labeled samples. The second stage augments the positive set with likely-negative samples and recomputes the classification. The final label prediction selects, among all partitions generated in both stages, the one whose positive class proportion is closest to a prior estimate of this quantity, which is assumed to be given. Extensive experiments on synthetic and real datasets show that 2-HNC yields strong performance and often surpasses existing state-of-the-art algorithms.
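To make the stage-1 ranking concrete, the following is a minimal sketch that sweeps a grid of tradeoff values and solves one minimum cut per value; this is a slower stand-in for the single parametric minimum cut that 2-HNC actually solves. It assumes a precomputed similarity matrix W, uses networkx, and attaches a uniform unit weight to each unlabeled sample; the function name rank_likely_negatives and these weighting choices are illustrative assumptions, not the paper's exact construction.

```python
import networkx as nx
import numpy as np

def rank_likely_negatives(W, positives, lambdas):
    """Rank unlabeled samples by likelihood of being negative (stage-1 sketch).

    W: (n, n) symmetric nonnegative similarity matrix.
    positives: indices of the labeled positive samples.
    lambdas: grid of tradeoff values, swept in increasing order.
    Returns unlabeled indices ordered most negative-like first.
    """
    n = W.shape[0]
    pos = set(positives)
    unlabeled = [i for i in range(n) if i not in pos]
    # Lambda value at which each node first leaves the positive (source) side.
    exit_lam = {i: np.inf for i in unlabeled}
    for lam in sorted(lambdas):
        G = nx.DiGraph()
        # Pairwise similarity arcs in both directions.
        for i in range(n):
            for j in range(i + 1, n):
                if W[i, j] > 0:
                    G.add_edge(i, j, capacity=float(W[i, j]))
                    G.add_edge(j, i, capacity=float(W[i, j]))
        # Labeled positives are pinned to the source side.
        for p in pos:
            G.add_edge("s", p, capacity=float("inf"))
        # Staying on the positive side costs lam per unlabeled node.
        for i in unlabeled:
            G.add_edge(i, "t", capacity=float(lam))
        _, (source_side, _) = nx.minimum_cut(G, "s", "t")
        for i in unlabeled:
            if i not in source_side and exit_lam[i] == np.inf:
                exit_lam[i] = lam
    # The partitions are nested as lam grows; nodes pushed off the positive
    # side at a small lam are weakly tied to the positives, hence ranked
    # most negative-like.
    return sorted(unlabeled, key=lambda i: exit_lam[i])
```

In the full method, the lowest-ranked samples from this stage would be taken as likely negatives for the second stage, and the final partition would be chosen, across both stages, by matching the given prior estimate of the positive class proportion.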
Abstract: We consider here a classification method that balances two objectives: high similarity among the samples within a cluster, and high dissimilarity between the cluster and its complement. The method, referred to as HNC or SNC, requires seed nodes, or labeled samples, at least one of which lies in the cluster and at least one in the complement. Other than that, the method relies only on the pairwise relationships between the samples. The contribution here is a new HNC-based method for classification in the presence of noisy labels, called Confidence HNC, in which we introduce confidence weights that allow the given labels of labeled samples to be violated, with a penalty that reflects the perceived correctness of each given label. If a label is violated, it is interpreted as having been noisy. The method represents the problem as a graph problem with hyperparameters, which is solved very efficiently by the network flow technique of parametric minimum cut. We compare the new method with leading algorithms on both real and synthetic data with noisy labels and demonstrate that it delivers improved performance in classification accuracy as well as in noise detection.
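The sketch below shows one plausible reading of the Confidence HNC graph at a single fixed tradeoff value lam: similarity arcs between samples, lam-weighted node arcs as in HNC, and, in place of HNC's hard (infinite-capacity) seed arcs, finite label arcs whose capacities are the confidence weights, so the cut may sever them and thereby flag those labels as noisy. The exact capacities and the function name confidence_hnc_cut are assumptions for illustration; the actual method solves all tradeoff values at once with a parametric cut rather than one cut per value.

```python
import networkx as nx
import numpy as np

def confidence_hnc_cut(W, labels, confidence, lam):
    """One minimum cut of a Confidence-HNC-style graph at tradeoff lam.

    W: (n, n) symmetric nonnegative similarity matrix.
    labels: dict {index: 0 or 1} of the given (possibly noisy) labels;
            must contain at least one sample of each class.
    confidence: dict {index: penalty for violating that sample's label}.
    Returns (predicted positive set, labeled samples flagged as noisy).
    """
    n = W.shape[0]
    d = W.sum(axis=1)  # weighted degrees, used as node weights
    G = nx.DiGraph()
    # Pairwise similarity arcs in both directions.
    for i in range(n):
        for j in range(i + 1, n):
            if W[i, j] > 0:
                G.add_edge(i, j, capacity=float(W[i, j]))
                G.add_edge(j, i, capacity=float(W[i, j]))
    for i in range(n):
        # lam * d_i pulls every node toward the cluster (source) side,
        # encoding the tradeoff between the cut and intra-cluster similarity.
        s_cap = lam * float(d[i])
        t_cap = 0.0
        # Finite-capacity label arcs: violating a given label costs its
        # confidence weight, unlike the hard (infinite) seed arcs of HNC.
        if i in labels:
            if labels[i] == 1:
                s_cap += float(confidence[i])
            else:
                t_cap += float(confidence[i])
        if s_cap > 0:
            G.add_edge("s", i, capacity=s_cap)
        if t_cap > 0:
            G.add_edge(i, "t", capacity=t_cap)
    _, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
    predicted_pos = {i for i in range(n) if i in source_side}
    # A label is flagged noisy when the cut places the sample on the side
    # opposite to its given label, i.e., its label arc was severed.
    flagged_noisy = {i for i, y in labels.items()
                     if (y == 1 and i in sink_side)
                     or (y == 0 and i in source_side)}
    return predicted_pos, flagged_noisy
```

The design point of the sketch is the finite label arcs: with infinite capacities it reduces to seeded HNC, while smaller confidence weights make the corresponding labels cheaper to overturn.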