Abstract: We present the Universal Latent Homeomorphic Manifold (ULHM), a framework that unifies semantic representations (e.g., human descriptions, diagnostic labels) and observation-driven machine representations (e.g., pixel intensities, sensor readings) into a single latent structure. Despite originating from fundamentally different pathways, both modalities capture the same underlying reality. We establish \emph{homeomorphism}, a continuous bijection preserving topological structure, as the mathematical criterion for determining when latent manifolds induced by different semantic-observation pairs can be rigorously unified. This criterion provides theoretical guarantees for three critical applications: (1) semantic-guided sparse recovery from incomplete observations, (2) cross-domain transfer learning with verified structural compatibility, and (3) zero-shot compositional learning via valid transfer from semantic to observation space. Our framework learns continuous manifold-to-manifold transformations through conditional variational inference, avoiding brittle point-to-point mappings. We develop practical verification algorithms, including trustworthiness, continuity, and Wasserstein-distance metrics, that empirically validate homeomorphic structure from finite samples. Experiments demonstrate: (1) sparse image recovery from 5\% of CelebA pixels and MNIST digit reconstruction at multiple sparsity levels, (2) cross-domain classifier transfer achieving 86.73\% accuracy from MNIST to Fashion-MNIST without retraining, and (3) zero-shot classification on unseen classes achieving 89.47\% on MNIST, 84.70\% on Fashion-MNIST, and 78.76\% on CIFAR-10. Critically, the homeomorphism criterion correctly rejects incompatible datasets, preventing invalid unification and providing a feasible path toward principled decomposition of general foundation models into verified domain-specific components.
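The Wasserstein-distance check mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes latent samples are given as arrays of shape (n_samples, latent_dim), compares the two empirical latent distributions dimension-wise with the 1-D Wasserstein distance, and uses a hypothetical acceptance threshold to decide whether unification should be attempted.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def mean_wasserstein(z_a: np.ndarray, z_b: np.ndarray) -> float:
    """Mean 1-D Wasserstein distance across latent dimensions.

    z_a, z_b: (n_samples, latent_dim) samples drawn from the two
    candidate latent manifolds.
    """
    dims = z_a.shape[1]
    return float(np.mean([wasserstein_distance(z_a[:, d], z_b[:, d])
                          for d in range(dims)]))


def compatible(z_a: np.ndarray, z_b: np.ndarray,
               threshold: float = 0.5) -> bool:
    """Accept unification only when the distributional gap is small.

    The threshold value here is a hypothetical tuning parameter,
    not one reported in the abstract.
    """
    return mean_wasserstein(z_a, z_b) < threshold
```

In this sketch, two encoders whose latent samples are distributionally close pass the check, while a mismatched pair is rejected, mirroring the abstract's claim that incompatible datasets are correctly refused.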
Abstract: This study aims to identify the factors behind under-five children's malnutrition using the nationwide Multiple Indicator Cluster Survey (MICS-2019) and to classify malnutrition stages with four well-established machine learning algorithms: Decision Tree (DT), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) neural network. Accuracy, precision, recall, and F1 scores are used to evaluate the performance of each model, and a Pearson correlation coefficient analysis is conducted to identify the factors significantly related to a child's malnutrition. Of the 24,686 samples in the dataset, 21,858 were eligible for analysis. Satisfactory and insightful results were obtained in each case, and the RF and MLP models performed exceptionally well. For RF, the accuracy was 98.55%, average precision 98.3%, recall 95.68%, and F1 score 97.13%. For MLP, the accuracy was 98.69%, average precision 97.62%, recall 90.96%, and F1 score 97.39%. From the Pearson coefficients, all negative correlations are listed, and the most significant impacts are found for WAZ2 (weight-for-age Z score, WHO) (-0.828), WHZ2 (weight-for-height Z score, WHO) (-0.706), ZBMI (BMI Z score, WHO) (-0.656), BD3 (whether the child is still being breastfed) (-0.59), HAZ2 (height-for-age Z score, WHO) (-0.452), CA1 (whether the child had diarrhea in the last 2 weeks) (-0.34), Windex5 (wealth index quintile) (-0.161), melevel (mother's education) (-0.132), and CA14/CA16/CA17 (whether the child had illness with fever, cough, or breathing difficulty) (-0.04), in that order.
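The Pearson analysis described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the feature names and toy values below are hypothetical stand-ins for survey columns such as WAZ2 or melevel, and the function simply computes the correlation of each feature with a malnutrition indicator and ranks features by magnitude.

```python
import numpy as np


def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two 1-D arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))


def rank_features(features: dict, target) -> list:
    """Rank features by |r| with the target, strongest first.

    features: mapping of feature name -> 1-D array of values.
    target:   1-D array, e.g. a malnutrition indicator.
    """
    scored = [(name, pearson_r(vals, target))
              for name, vals in features.items()]
    return sorted(scored, key=lambda nr: abs(nr[1]), reverse=True)
```

Ranking by absolute correlation reproduces the kind of ordered list reported in the abstract, where strongly negative coefficients (e.g. for the WHO Z scores) dominate weaker ones.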