Regularization techniques such as L2 regularization (Weight Decay) and Dropout are fundamental to training deep neural networks, yet the mechanisms by which they shape feature frequency selection remain poorly understood. In this work, we investigate the Spectral Bias of modern Convolutional Neural Networks (CNNs). We introduce a Visual Diagnostic Framework to track the evolution of weight frequency content during training and propose a novel metric, the Spectral Suppression Ratio (SSR), to quantify the "low-pass filtering" strength of different regularizers. Addressing the aliasing inherent to small kernels (e.g., 3×3) through discrete radial profiling, we show empirically on ResNet-18 and CIFAR-10 that L2 regularization suppresses high-frequency energy accumulation by more than 3× compared to unregularized baselines. Furthermore, we reveal a critical Accuracy-Robustness Trade-off: while L2-regularized models are sensitive to broadband Gaussian noise, owing to over-specialization in low frequencies, they exhibit superior robustness to high-frequency information loss (e.g., low resolution), outperforming baselines by more than 6% in blurred scenarios. This work provides a signal-processing perspective on generalization, confirming that regularization enforces a strong spectral inductive bias towards low-frequency structure.
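The abstract defers the exact definitions of the discrete radial profile and the SSR to the body of the paper. As an illustration only, the sketch below bins the 2D power spectrum of a single 3×3 kernel by its exact radial frequencies (avoiding interpolation onto a continuous radius, which is where aliasing arises for such small kernels) and computes a hypothetical suppression ratio. The function names, the high/low cutoff, and the SSR formula (high-frequency energy over total energy) are assumptions for this sketch, not the paper's definition.

```python
import numpy as np

def radial_profile(kernel: np.ndarray) -> dict:
    """Discrete radial profile of a small 2D kernel's power spectrum.

    For a 3x3 kernel the centred DFT has only three distinct radial
    frequencies (radius 0, 1, sqrt(2) in index units), so energy is
    binned by exact radius rather than interpolated onto a continuous
    radial axis.
    """
    k = kernel.shape[-1]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(kernel))) ** 2  # power spectrum, DC centred
    c = k // 2
    ys, xs = np.indices((k, k))
    radii = np.hypot(ys - c, xs - c)  # distance of each frequency bin from DC
    profile = {}
    for r in np.unique(np.round(radii, 6)):
        mask = np.isclose(radii, r)
        profile[float(r)] = float(spec[mask].sum())  # total energy at this radius
    return profile

def spectral_suppression_ratio(kernel: np.ndarray, low_cutoff: float = 0.5) -> float:
    """Hypothetical SSR: fraction of spectral energy above `low_cutoff`.

    Both the cutoff and the normalization are illustrative assumptions;
    the paper's SSR may be defined differently.
    """
    profile = radial_profile(kernel)
    total = sum(profile.values())
    high = sum(e for r, e in profile.items() if r > low_cutoff)
    return high / (total + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((3, 3))  # stand-in for one 3x3 conv kernel
    print(radial_profile(w))
    print(spectral_suppression_ratio(w))
```

In practice one would presumably aggregate such profiles over all 3×3 kernels in each layer of ResNet-18 and track them across training epochs to visualize how L2 regularization or Dropout redistributes energy between the low- and high-frequency bins.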