Abstract: Generating synthetic tabular data under severe class imbalance is essential for domains where rare but high-impact events drive decision-making. However, most generative models either overlook minority groups or fail to produce samples that are useful for downstream learning. We introduce CTTVAE, a Conditional Transformer-based Tabular Variational Autoencoder equipped with two complementary mechanisms: (i) a class-aware triplet margin loss that restructures the latent space for sharper intra-class compactness and inter-class separation, and (ii) a training-by-sampling strategy that adaptively increases exposure to underrepresented groups. Together, these components form CTTVAE+TBS, a framework that consistently yields more representative and utility-aligned samples without destabilizing training. Across six real-world benchmarks, CTTVAE+TBS achieves the strongest downstream utility on minority classes, often surpassing models trained on the original imbalanced data, while maintaining competitive fidelity and narrowing the privacy gap between interpolation-based sampling methods and deep generative methods. Ablation studies further confirm that both latent structuring and targeted sampling contribute to these gains. By explicitly prioritizing downstream performance on rare categories, CTTVAE+TBS provides a robust and interpretable solution for conditional tabular data generation, with direct applicability to industries such as healthcare, fraud detection, and predictive maintenance, where even small gains on minority cases can be critical.
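To make the first mechanism concrete, the following is a minimal sketch of a class-aware triplet margin loss applied to latent codes, assuming PyTorch; the function name, the batch-hard mining scheme, and the margin value are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def class_aware_triplet_loss(z, y, margin=1.0):
    """Illustrative triplet margin loss on latent codes z (B, D) with
    class labels y (B,): pull same-class codes together, push
    different-class codes apart (assumed mining scheme)."""
    # Pairwise Euclidean distances between latent codes, shape (B, B).
    dist = torch.cdist(z, z)
    same = y.unsqueeze(0) == y.unsqueeze(1)   # True where classes match
    eye = torch.eye(len(y), dtype=torch.bool, device=z.device)

    # Hardest positive: farthest same-class code (self excluded).
    pos = dist.masked_fill(~same | eye, float("-inf")).amax(dim=1)
    # Hardest negative: closest code from any other class.
    neg = dist.masked_fill(same, float("inf")).amin(dim=1)

    # Keep anchors that have at least one positive and one negative.
    valid = torch.isfinite(pos) & torch.isfinite(neg)
    if not valid.any():
        return z.new_zeros(())
    # Hinge: same-class codes should sit closer than other-class
    # codes by at least `margin`.
    return F.relu(pos[valid] - neg[valid] + margin).mean()
```

In a VAE of this kind, such a term would plausibly be added to the ELBO with a weighting coefficient, so that reconstruction, regularization, and latent-space structuring are optimized jointly.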
Abstract: The characterization of exoplanetary atmospheres through spectral analysis is a complex challenge. The NeurIPS 2024 Ariel Data Challenge, in collaboration with the European Space Agency's (ESA) Ariel mission, provided an opportunity to explore machine learning techniques for extracting atmospheric compositions from simulated spectral data. In this work, we focus on a data-centric, business-driven approach, prioritizing generalization over competition-specific optimization. We briefly outline multiple experimental axes, including feature extraction, signal transformation, and heteroskedastic uncertainty modeling. Our experiments demonstrate that uncertainty estimation plays a crucial role in the Gaussian Log-Likelihood (GLL) score, impacting performance by several percentage points. Despite improving the GLL score by 11%, our results highlight the inherent limitations of tabular modeling and feature engineering for this task, as well as the constraints of a business-driven approach within a Kaggle-style competition framework. Our findings emphasize the trade-offs between model simplicity, interpretability, and generalization in astrophysical data analysis.
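As a point of reference for why uncertainty estimation moves the score, here is a minimal sketch of the per-point Gaussian log-likelihood that a GLL-style metric aggregates; it assumes NumPy arrays of predicted means `mu`, predicted standard deviations `sigma`, and targets `y`, and does not reproduce the challenge's exact normalization or weighting.

```python
import numpy as np

def gaussian_log_likelihood(y, mu, sigma):
    """Per-point log N(y; mu, sigma^2). The score rewards predictions
    whose stated uncertainty matches the actual error: over-confident
    (too-small sigma) and under-confident (too-large sigma) estimates
    are both penalized."""
    sigma = np.clip(sigma, 1e-12, None)  # guard against division by zero
    return -0.5 * (np.log(2 * np.pi * sigma**2) + ((y - mu) / sigma) ** 2)
```

Because `sigma` enters both the log term and the squared-error term, recalibrating predicted uncertainties alone, without changing the means, can shift the aggregate score, consistent with the several-percentage-point impact noted above.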