Radio Frequency Interference (RFI) detection and mitigation are critical for enabling and maximising the scientific output of radio telescopes. The emergence of machine learning methods capable of handling large datasets has led to their application in radio astronomy, particularly in RFI detection. Spiking Neural Networks (SNNs), inspired by biological systems, are well suited to processing spatio-temporal data. This study introduces, to our knowledge, the first application of SNNs to an astronomical data-processing task, specifically RFI detection. We adapt the Nearest Latent Neighbours (NLN) algorithm and auto-encoder architecture proposed by previous authors to SNN execution via direct ANN2SNN conversion, enabling simplified downstream RFI detection by sampling the naturally varying latent space of the internal spiking neurons. We evaluate performance on the simulated HERA telescope dataset and the hand-labelled LOFAR dataset provided by the original authors, as well as on a new MeerKAT-inspired simulation dataset. This new dataset focuses on satellite-based RFI, an increasingly important class of interference, and therefore constitutes an additional contribution. Our SNN approach remains competitive with the original NLN algorithm and AOFlagger in AUROC, AUPRC and F1 scores on the HERA dataset, but struggles on the LOFAR and MeerKAT datasets. Crucially, it achieves this while entirely removing the compute- and memory-intensive latent-sampling step required by NLN. By establishing a minimal performance baseline on both traditional and nascent satellite-based RFI sources, this work demonstrates the viability of SNNs as a promising avenue for machine-learning-based RFI detection in radio telescopes.
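To illustrate the principle behind direct ANN2SNN conversion (this is a minimal, self-contained sketch of the general rate-coding idea, not the authors' exact pipeline; the function name and parameters are hypothetical): a trained ANN unit with a ReLU activation can be replaced by an integrate-and-fire spiking neuron whose firing rate, averaged over enough timesteps, approximates the original activation value.

```python
import numpy as np

def if_neuron_rate(inputs, weights, threshold=1.0, steps=1000):
    """Simulate a single integrate-and-fire neuron driven by a constant
    input current (rate coding). The returned firing rate approximates
    the ReLU activation of the equivalent ANN unit."""
    current = float(np.dot(inputs, weights))
    v = 0.0
    spikes = 0
    for _ in range(steps):
        v += current            # integrate the input current each timestep
        if v >= threshold:      # emit a spike, reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / steps

x = np.array([0.2, 0.5])
w = np.array([1.0, 0.4])
ann_out = max(0.0, float(np.dot(x, w)))   # ReLU activation of the ANN unit
snn_out = if_neuron_rate(x, w)            # firing rate of the converted neuron
```

With a sub-threshold constant current, the spike rate converges to the ReLU output (here both are 0.4); negative currents never cross threshold, matching ReLU's zero branch. This correspondence is what lets a conventionally trained auto-encoder be executed as an SNN without retraining.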
Neuromorphic computing and spiking neural networks aim to leverage biological inspiration to achieve greater energy efficiency and computational power than traditional von Neumann-architecture machines. In particular, spiking neural networks hold the potential to advance artificial intelligence as the basis of third-generation neural networks. Aided by developments in memristive and compute-in-memory technologies, neuromorphic computing hardware is transitioning from laboratory prototypes to commercial chipsets, ushering in an era of low-power computing. As a nexus of the biological, computing, and material sciences, the literature surrounding these concepts is vast, varied, and somewhat distinct from artificial-neural-network sources. This article uses bibliometric analysis to survey the last 22 years of literature, seeking to establish trends in publication and citation volumes (III-A); analyze impactful authors, journals, and institutions (III-B); generate an introductory reading list (III-C); survey collaborations between countries, institutes, and authors (III-D); and analyze changes in research topics over the years (III-E). We analyze literature data from the Clarivate Web of Science using standard bibliometric methods. By briefly introducing the most impactful literature in this field from the last two decades, we encourage AI practitioners and researchers to look beyond contemporary technologies toward a potentially spiking future of computing.
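Two of the standard bibliometric measures mentioned above, publication volume per year and the h-index, are simple to compute once records are exported. The sketch below uses toy records standing in for Web of Science export rows; the field names are hypothetical and not the actual export schema.

```python
from collections import Counter

def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Toy records standing in for exported bibliographic data
records = [
    {"year": 2019, "citations": 10},
    {"year": 2020, "citations": 3},
    {"year": 2020, "citations": 25},
    {"year": 2021, "citations": 1},
]

per_year = Counter(r["year"] for r in records)          # publication volume trend
h = h_index([r["citations"] for r in records])          # impact of the record set
```

Citation-volume trends, author and institution rankings, and collaboration networks follow the same pattern: aggregate exported records over the relevant field, then rank or graph the counts.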