Deep autoencoders have become a fundamental tool in a variety of machine learning applications, ranging from dimensionality reduction and reduced order modeling of partial differential equations to anomaly detection and neural machine translation. Despite their empirical success, a solid theoretical foundation for their expressiveness remains elusive, particularly when compared to classical projection-based techniques. In this work, we take a step in this direction by presenting a comprehensive analysis of what we refer to as symmetric autoencoders, a broad class of deep learning architectures that is ubiquitous in the literature. Specifically, we introduce a formal distinction between different classes of symmetric architectures, analyzing their strengths and limitations from a mathematical perspective. For instance, we show that the reconstruction error of symmetric autoencoders with orthonormality constraints can be understood by leveraging the celebrated Eckart-Young-Schmidt (EYS) theorem. As a byproduct of our analysis, we develop the EYS initialization strategy for symmetric autoencoders, which relies on an iterated application of the Singular Value Decomposition (SVD). To validate our findings, we conduct a series of numerical experiments in which we benchmark our proposal against conventional deep autoencoders, discussing the importance of model design and initialization.
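For the reader's convenience, we recall the classical Eckart-Young-Schmidt theorem in its Frobenius-norm form, which underlies the result mentioned above (the notation below is introduced solely for this statement): given $X \in \mathbb{R}^{m \times n}$ with singular value decomposition $X = \sum_{i} \sigma_i u_i v_i^\top$, where $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$, the rank-$k$ truncated SVD minimizes the reconstruction error over all matrices of rank at most $k$,
\[
  \min_{\operatorname{rank}(B) \le k} \| X - B \|_F
  \;=\;
  \Big\| X - \sum_{i=1}^{k} \sigma_i\, u_i v_i^\top \Big\|_F
  \;=\;
  \Big( \sum_{i > k} \sigma_i^2 \Big)^{1/2}.
\]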