Abstract: This study advances the Variational Autoencoder (VAE) framework by addressing challenges in Independent Component Analysis (ICA) under both determined and underdetermined conditions, focusing on enhancing the independence and interpretability of latent variables. Traditional VAEs map observed data to latent variables and back via an encoder-decoder architecture, but they struggle with underdetermined ICA, where the number of latent variables exceeds the number of observed signals. The proposed Half Adversarial VAE (Half-AVAE) builds on the encoder-free Half-VAE framework, eliminating the explicit inverse mapping to tackle underdetermined scenarios. By integrating adversarial networks and External Enhancement (EE) terms, Half-AVAE promotes mutual independence among latent dimensions, yielding factorized and interpretable representations. Experiments with synthetic signals demonstrate that Half-AVAE outperforms baseline models, including GP-AVAE and Half-VAE, in recovering independent components under underdetermined conditions, as evidenced by lower root-mean-square errors. The study highlights the flexibility of VAEs in variational inference, showing that omitting the encoder, combined with adversarial training and structured priors, enables effective solutions for complex ICA tasks, advancing applications in disentanglement, causal inference, and generative modeling.
Abstract: Inference and inverse problems are closely related concepts, both fundamentally involving the deduction of unknown causes or parameters from observed data. Bayesian inference, a powerful class of methods, is often employed to solve a variety of problems, including those related to causal inference. Variational inference, a subset of Bayesian inference, is primarily used to approximate complex posterior distributions efficiently. Variational Autoencoders (VAEs), which combine variational inference with deep learning, have been widely applied across various domains. This study explores the potential of VAEs for solving inverse problems, such as Independent Component Analysis (ICA), without relying on an explicit inverse mapping process. Unlike other VAE-based ICA methods, this approach discards the encoder in the VAE architecture and sets the latent variables directly as trainable parameters. In other words, the latent variables are no longer outputs of an encoder; they are optimized directly through the objective function until they converge to appropriate values. We find that, with a suitable prior setup, the latent variables represented by these trainable parameters can become mutually independent as the parameters converge, all without an encoding process. This approach, referred to as the Half-VAE, bypasses the inverse mapping by eliminating the encoder, and this study demonstrates its feasibility for solving ICA.
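To make the encoder-free construction concrete, the following minimal PyTorch sketch (an assumed illustration, not the authors' code; the class name HalfVAE, the network sizes, and the isotropic Gaussian prior are all hypothetical choices, and the paper's structured prior would replace the standard-normal KL term) treats the per-sample variational parameters as directly trainable tensors and optimizes them jointly with the decoder under a standard ELBO.

```python
import torch
import torch.nn as nn

class HalfVAE(nn.Module):
    """Decoder-only VAE sketch: the encoder is removed and the
    variational parameters of every data point are trained directly."""

    def __init__(self, n_samples: int, latent_dim: int, obs_dim: int):
        super().__init__()
        # Latent variables are trainable parameters, not encoder outputs:
        # one mean and log-variance per data point and latent dimension.
        self.z_mu = nn.Parameter(0.01 * torch.randn(n_samples, latent_dim))
        self.z_logvar = nn.Parameter(torch.zeros(n_samples, latent_dim))
        # The decoder is the only mapping: latents -> observed signals.
        self.decode = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, obs_dim)
        )

    def forward(self) -> torch.Tensor:
        # Standard reparameterization trick over the trainable latents.
        eps = torch.randn_like(self.z_mu)
        z = self.z_mu + torch.exp(0.5 * self.z_logvar) * eps
        return self.decode(z)

def elbo_loss(x, x_hat, z_mu, z_logvar):
    # Reconstruction term plus KL divergence to a standard-normal prior
    # (a placeholder for whatever prior structure the method imposes).
    recon = ((x - x_hat) ** 2).sum()
    kl = -0.5 * (1.0 + z_logvar - z_mu.pow(2) - z_logvar.exp()).sum()
    return recon + kl

# Usage: with x holding all observed signals (n_samples x obs_dim),
# optimizing elbo_loss(x, model(), model.z_mu, model.z_logvar) drives the
# latent parameters to converge with no encoding (inverse mapping) step.
```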
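The adversarial ingredient of Half-AVAE can likewise be sketched. The snippet below shows one common way to penalize statistical dependence among latent dimensions, a FactorVAE-style total-correlation discriminator that tells joint latent samples apart from dimension-wise permuted ones; whether this matches the paper's exact adversarial networks and EE terms is an assumption, and all names here are illustrative.

```python
import torch
import torch.nn as nn

latent_dim = 3  # illustrative: more latent sources than observed signals

# Discriminator that scores whether a latent batch looks jointly
# distributed (dependent) or like a product of marginals (independent).
disc = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 2)
)

def permute_dims(z: torch.Tensor) -> torch.Tensor:
    """Shuffle each latent dimension independently across the batch,
    approximating a sample from the product of marginals."""
    cols = [z[torch.randperm(z.size(0)), j] for j in range(z.size(1))]
    return torch.stack(cols, dim=1)

def independence_penalty(z: torch.Tensor) -> torch.Tensor:
    # The difference of the two logits approximates the log density
    # ratio between the joint and the product of marginals; minimizing
    # it pushes the latent dimensions toward mutual independence.
    logits = disc(z)
    return (logits[:, 0] - logits[:, 1]).mean()
```

In a full training loop, the discriminator would be updated with a cross-entropy objective on joint versus permuted latent batches, while the trainable latents and decoder minimize the ELBO plus this penalty.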