Abstract: We present a comprehensive analysis of the digest2 parameters for candidates on the Near-Earth Object Confirmation Page (NEOCP) that were reported between 2019 and 2024. Despite the substantial increase in near-Earth object (NEO) discoveries in recent years, only about half of the NEOCP candidates are ultimately confirmed as NEOs, so much observing time is spent following up on non-NEOs. Furthermore, approximately 11% of the candidates, nearly 600 cases per year, remain unconfirmed because the follow-up observations are insufficient. Our study proposes methods for significantly reducing the number of non-NEOs posted on the NEOCP. To reduce false positives and minimize the resources wasted on non-NEOs, we refine the posting criteria for the NEOCP based on a detailed analysis of all digest2 scores. We investigated 30 distinct digest2 parameter categories for candidates that were confirmed as NEOs and non-NEOs. From this analysis, we derived a filtering mechanism based on selected digest2 parameters that excludes 20% of the non-NEOs from the NEOCP with a minimal loss of true NEOs. We also investigated the application of four machine-learning (ML) techniques, namely the gradient-boosting machine (GBM), the random forest (RF) classifier, the stochastic gradient descent (SGD) classifier, and neural networks (NNs), to classify NEOCP candidates as NEOs or non-NEOs. Using the digest2 parameters as input, our ML models achieved a precision of approximately 95% in distinguishing between NEOs and non-NEOs. Combining the digest2 parameter filter with an ML-based classification model, we demonstrate a reduction in non-NEOs on the NEOCP that exceeds 80%, while limiting the loss of NEO discovery tracklets to 5.5%. Importantly, we show that most follow-up tracklets of initially misclassified NEOs are later correctly identified as NEOs.
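As a minimal, hedged sketch of the classification step summarized above, the snippet below trains a gradient-boosting classifier on digest2 category scores to separate NEOs from non-NEOs and reports its precision. The file name "neocp_digest2_scores.csv", the "label" column, and the hyperparameters are illustrative assumptions, not the authors' actual pipeline; the paper's 30 digest2 parameter categories would stand in for the feature columns.

    # Sketch: gradient-boosting classification of NEOCP candidates from
    # digest2 scores. File name and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score

    # Hypothetical table: one row per NEOCP tracklet, columns are the
    # digest2 category scores plus a label (1 = confirmed NEO, 0 = non-NEO).
    data = pd.read_csv("neocp_digest2_scores.csv")
    X = data.drop(columns=["label"])
    y = data["label"]

    # Hold out a stratified test set so the rare class is represented.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )

    clf = GradientBoostingClassifier(random_state=42)
    clf.fit(X_train, y_train)

    # Precision on the NEO class is the metric emphasized in the abstract.
    y_pred = clf.predict(X_test)
    print(f"Precision (NEO class): {precision_score(y_test, y_pred):.3f}")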
Abstract: The Legacy Survey of Space and Time (LSST), to be conducted with the Vera C. Rubin Observatory, is poised to revolutionize our understanding of the Solar System by providing an unprecedented wealth of data on various objects, including the elusive interstellar objects (ISOs). Detecting and classifying ISOs is crucial for studying the composition and diversity of materials from other planetary systems. However, the rarity and brief observation windows of ISOs, coupled with the vast quantities of data that LSST will generate, pose significant challenges for their identification and classification. This study addresses these challenges by applying machine-learning algorithms, namely random forests (RFs), stochastic gradient descent (SGD), gradient-boosting machines (GBMs), and neural networks (NNs), to the automated classification of ISO tracklets in simulated LSST data. We demonstrate that the GBM and RF algorithms outperform the SGD and NN algorithms in accurately distinguishing ISOs from other Solar System objects. An RF analysis shows that many digest2-derived values are more important than direct observables for classifying ISOs in LSST tracklets. The GBM model achieves the highest precision, recall, and F1 score, with values of 0.9987, 0.9986, and 0.9987, respectively. These findings lay the foundation for an efficient and robust automated system for ISO discovery with LSST data, paving the way for a deeper understanding of the materials and processes that shape planetary systems beyond our own. Integrating our proposed machine-learning approach into the LSST data processing pipeline will optimize the survey's potential for identifying these rare and valuable objects, enabling timely follow-up observations and further characterization.
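The RF feature-importance comparison mentioned above can be outlined as follows; this is a sketch under assumed file and column names ("lsst_iso_tracklets.csv", "is_iso"), not the study's actual code, and the feature set merely stands in for digest2-derived values alongside direct observables.

    # Sketch: random forest feature importance for ISO tracklet
    # classification. File name and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical table: one row per simulated LSST tracklet, with
    # digest2-derived scores and direct observables as feature columns.
    data = pd.read_csv("lsst_iso_tracklets.csv")
    X = data.drop(columns=["is_iso"])
    y = data["is_iso"]  # 1 = ISO tracklet, 0 = other Solar System object

    rf = RandomForestClassifier(n_estimators=500, random_state=42)
    rf.fit(X, y)

    # Rank features by mean impurity decrease; per the abstract, the
    # digest2-derived values would tend to outrank direct observables.
    importances = pd.Series(rf.feature_importances_, index=X.columns)
    print(importances.sort_values(ascending=False).head(10))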