Abstract: Gaia Data Release 3 (DR3) published, for the first time, epoch photometry, BP/RP (XP) low-resolution mean spectra, and supervised classification results for millions of variable sources. This extensive dataset offers a unique opportunity to study the variability of these sources by combining multiple Gaia data products. In preparation for DR4, we propose and evaluate a machine learning methodology capable of ingesting multiple Gaia data products to achieve an unsupervised classification of stellar and quasar variability. A dataset of 4 million Gaia DR3 sources is used to train three variational autoencoders (VAEs), artificial neural networks (ANNs) designed for data compression and generation. One VAE is trained on Gaia XP low-resolution spectra, another on a novel representation based on the distribution of magnitude differences in the Gaia G band, and the third on folded Gaia G-band light curves. Each Gaia source is compressed into 15 numbers, representing its coordinates in a 15-dimensional latent space obtained by combining the outputs of these three models. The latent representation learned by the ANNs effectively distinguishes between the main variability classes present in Gaia DR3, as demonstrated through both supervised and unsupervised classification analyses of the latent space. The results highlight a strong synergy between light curves and low-resolution spectral data, emphasising the benefits of combining the different Gaia data products. A two-dimensional projection of the latent variables reveals numerous overdensities, most of which correlate strongly with astrophysical properties, showing the potential of this latent space for astrophysical discovery. We show that the properties of our novel latent representation make it highly valuable for variability analysis tasks, including classification, clustering and outlier detection.
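The abstract does not describe the network in detail, so the following is only a minimal sketch of the kind of variational autoencoder it refers to, written in PyTorch with hypothetical sizes: an encoder maps a fixed-length input (standing in here for an XP spectrum) to the mean and log-variance of a small latent Gaussian, and a decoder reconstructs the input from a latent sample. The input dimension (343), hidden-layer sizes, and per-model latent size of 5 (so that three such models would together give a 15-dimensional representation) are assumptions, not the authors' architecture.

```python
# Minimal VAE sketch (not the authors' code): compresses a fixed-length vector,
# standing in for a Gaia XP low-resolution spectrum, into a small latent space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectrumVAE(nn.Module):
    def __init__(self, input_dim: int = 343, latent_dim: int = 5):
        super().__init__()
        # Encoder: maps the input to the mean and log-variance of the latent Gaussian.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64, latent_dim)
        self.fc_logvar = nn.Linear(64, latent_dim)
        # Decoder: reconstructs the input from a latent sample.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def reparameterize(self, mu, logvar):
        # Sample z ~ N(mu, sigma^2) in a differentiable way (reparameterisation trick).
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar


def vae_loss(recon, x, mu, logvar, beta: float = 1.0):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_loss = F.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl


if __name__ == "__main__":
    model = SpectrumVAE()
    x = torch.randn(32, 343)           # a batch of 32 toy "spectra"
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    loss.backward()
    print(loss.item(), mu.shape)       # latent coordinates: (32, 5)
```

In this picture, the 15 latent coordinates quoted in the abstract would correspond to concatenating the latent means of three such encoders, one per Gaia data product.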
Abstract: To prepare for the analysis of future PLATO light curves, we develop a deep learning model, Panopticon, to detect transits in high-precision photometric light curves. Since PLATO's main objective is the detection of temperate Earth-size planets around solar-type stars, the code is designed to detect individual transit events. The filtering step required by conventional detection methods can distort the transit signal, which could be an issue for long and shallow transits. To preserve the transit shape and depth, the code is therefore also designed to work on unfiltered light curves. We trained the model on a set of simulated PLATO light curves in which we injected, at pixel level, either planetary, eclipsing binary, or background eclipsing binary signals. We also included a variety of noise sources in our data, such as granulation, stellar spots, and cosmic rays. The approach is able to recover 90% of our test population, including more than 25% of the Earth analogs, even in the unfiltered light curves. The model recovers transits irrespective of the orbital period and is able to retrieve them on a single-event basis. These figures are obtained when accepting a false alarm rate of 1%. When the false alarm rate is kept low (<0.01%), the model still recovers more than 85% of the transit signals, and any transit deeper than 180 ppm is essentially guaranteed to be recovered. The method thus recovers transits on a single-event basis while maintaining a low false alarm rate. Because light curves are one-dimensional, model training is fast, on the order of a few hours per model. This speed in training and inference, coupled with the recovery effectiveness and precision of the model, makes it an ideal tool to complement, or be used ahead of, classical approaches.
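The abstract does not specify Panopticon's architecture, so the sketch below is only an illustrative PyTorch example of per-point transit detection on an unfiltered light curve: a small stack of dilated 1D convolutions assigns each flux point a probability of lying inside a transit, so that a single event can trigger a detection. The layer sizes, light-curve length, binary transit/no-transit labelling (the paper also distinguishes eclipsing-binary signals), and the 0.5 detection threshold are all assumptions.

```python
# Hedged sketch (not the actual Panopticon code): a small 1D convolutional
# network that classifies each point of an unfiltered light curve as
# in-transit or out-of-transit, enabling single-event detections.
import torch
import torch.nn as nn


class TransitSegmenter(nn.Module):
    def __init__(self, n_classes: int = 2, channels: int = 32):
        super().__init__()
        # Dilated 1D convolutions; padding keeps the sequence length fixed,
        # so the output is a per-point classification over the light curve.
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=7, padding=6, dilation=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=7, padding=12, dilation=4), nn.ReLU(),
            nn.Conv1d(channels, n_classes, kernel_size=1),
        )

    def forward(self, flux):
        # flux: (batch, n_points) normalised flux; returns per-point class logits.
        return self.net(flux.unsqueeze(1))


if __name__ == "__main__":
    model = TransitSegmenter()
    flux = torch.randn(4, 2000)                  # 4 toy light curves of 2000 points
    logits = model(flux)                         # shape (4, 2, 2000)
    transit_prob = logits.softmax(dim=1)[:, 1]   # per-point transit probability
    # A simple detection rule: flag any point above a chosen threshold.
    detections = transit_prob > 0.5
    print(logits.shape, detections.any(dim=1))
```

In such a setup, raising or lowering the per-point probability threshold is the natural knob for trading recovery fraction against the false alarm rates quoted in the abstract.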