Abstract: High-capacity associative memories based on Kernel Logistic Regression (KLR) are known for their exceptional performance but are hindered by high computational costs. This paper investigates the compressibility of KLR-trained Hopfield networks to understand the geometric principles of their robust encoding. We develop a comprehensive geometric theory based on spontaneous symmetry breaking and Walsh analysis, and validate it with compression experiments (quantization and pruning). These experiments reveal a striking contrast: the network is extremely robust to low-precision quantization but highly sensitive to pruning. Our theory explains this via a ``sparse function, dense representation'' principle, in which a sparse input mapping is implemented through a dense, bimodal parameterization. Our findings not only provide a practical path to hardware-efficient kernel memories but also offer new insights into the geometric principles of robust representation in neural systems.
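
As a minimal sketch of the two compression modes compared above (not the paper's exact protocol), quantization and magnitude pruning can be applied to a trained dual-coefficient matrix as follows; the matrix `alpha`, the bit width, and the pruning ratio are illustrative assumptions.

```python
import numpy as np

def quantize(alpha, bits=3):
    """Uniform symmetric quantization of dual coefficients to `bits` bits."""
    scale = np.max(np.abs(alpha))
    levels = 2 ** (bits - 1) - 1
    return np.round(alpha / scale * levels) / levels * scale

def prune(alpha, ratio=0.5):
    """Zero out the smallest-magnitude fraction `ratio` of coefficients."""
    thresh = np.quantile(np.abs(alpha), ratio)
    return np.where(np.abs(alpha) >= thresh, alpha, 0.0)

alpha = np.random.randn(100, 64)     # stand-in for trained dual variables
alpha_q = quantize(alpha, bits=3)    # regime the abstract reports as robust
alpha_p = prune(alpha, ratio=0.5)    # regime the abstract reports as fragile
```

One reading consistent with the abstract's principle: quantization preserves the bimodal sign structure of the dense coefficients, whereas pruning removes coefficients that the dense representation actually relies on.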
Abstract: Kernel-based learning methods can dramatically increase the storage capacity of Hopfield networks, yet the dynamical mechanism behind this enhancement remains poorly understood. We address this gap by conducting a geometric analysis of the network's energy landscape. We introduce a novel metric, "Pinnacle Sharpness," to quantify the local stability of attractors. By systematically varying the kernel width and storage load, we uncover a rich phase diagram of attractor shapes. Our central finding is the emergence of a "ridge of optimization," where the network maximizes attractor stability under challenging high-load and global-kernel conditions. Through a theoretical decomposition of the landscape gradient into a direct "driving" force and an indirect "feedback" force, we reveal the origin of this phenomenon. The optimization ridge corresponds to a regime of strong anti-correlation between the two forces, where the direct force, amplified by the high storage load, dominates the opposing collective feedback force. This demonstrates a sophisticated self-organization mechanism: the network adaptively harnesses inter-pattern interactions as a cooperative feedback control system to sculpt a robust energy landscape. Our findings provide a new physical picture for the stability of high-capacity associative memories and offer principles for their design.
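
The abstract does not spell out the definition of "Pinnacle Sharpness," but a simple proxy for the local stability it measures can be sketched as the minimum energy rise over single-spin flips at a stored pattern; the RBF kernel energy and all parameter names below are assumptions for illustration, not the paper's metric.

```python
import numpy as np

def rbf_kernel(s, patterns, gamma):
    """k(s, xi) = exp(-gamma * ||s - xi||^2) for +/-1 state vectors."""
    return np.exp(-gamma * np.sum((patterns - s) ** 2, axis=1))

def energy(s, patterns, coef, gamma):
    """Illustrative kernel energy: low near heavily weighted stored patterns."""
    return -np.dot(coef, rbf_kernel(s, patterns, gamma))

def sharpness_proxy(xi, patterns, coef, gamma):
    """Minimum energy increase over all single-spin flips at attractor xi
    (a stand-in for the paper's Pinnacle Sharpness)."""
    e0 = energy(xi, patterns, coef, gamma)
    rises = []
    for i in range(xi.size):
        s = xi.copy()
        s[i] = -s[i]
        rises.append(energy(s, patterns, coef, gamma) - e0)
    return min(rises)
```

Sweeping `gamma` (kernel width) and the number of stored patterns (storage load) with such a proxy is one way to reproduce the kind of phase diagram the abstract describes.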
Abstract: Traditional Hopfield networks trained with Hebbian learning face severe storage-capacity limits ($\approx 0.14$ P/N) and spurious attractors. Kernel Logistic Regression (KLR) offers a non-linear alternative, mapping patterns to high-dimensional feature spaces for improved separability. Our previous work showed that KLR dramatically improves capacity and noise robustness over conventional methods. This paper quantitatively analyzes the attractor structures of KLR-trained networks via extensive simulations. We evaluated recall from diverse initial states across a wide range of storage loads (up to 4.0 P/N) and noise levels, quantifying convergence rates and speed. Our analysis confirms KLR's superior performance: high capacity (up to 4.0 P/N) and strong robustness. The attractor landscape is remarkably "clean," with near-zero spurious fixed points. Recall failures under high load or noise are primarily due to convergence to other learned patterns rather than spurious states. Dynamics are exceptionally fast (typically 1-2 steps from high-similarity initial states). This characterization reveals how KLR reshapes the network dynamics to support high-capacity associative memory, highlighting its effectiveness and contributing to the broader understanding of associative memories.
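
A minimal sketch of the recall protocol summarized above: iterate a synchronous update to a fixed point, count the steps, and classify the fixed point as a learned pattern, its inverse, or a spurious state. The update rule `update_fn` and all names are assumptions, not the paper's code.

```python
import numpy as np

def recall(state, update_fn, max_steps=50):
    """Iterate until a fixed point; return the final state and step count."""
    for step in range(1, max_steps + 1):
        new_state = update_fn(state)
        if np.array_equal(new_state, state):
            return state, step - 1
        state = new_state
    return state, max_steps

def classify_fixed_point(final, stored):
    """Label a fixed point by its overlap with the stored patterns."""
    overlaps = stored @ final / final.size   # in [-1, 1], one per pattern
    if np.max(overlaps) == 1.0:
        return "learned"
    if np.min(overlaps) == -1.0:
        return "inverted"
    return "spurious"
```

Counting the `spurious` label over many noisy initial states gives the near-zero spurious-attractor rate reported above.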
Abstract: Hebbian learning limits Hopfield network capacity. While kernel methods such as Kernel Logistic Regression (KLR) improve performance via iterative learning, we propose Kernel Ridge Regression (KRR) as an alternative. KRR learns the dual variables non-iteratively via a closed-form solution, offering a significant speed advantage in training. We show that KRR achieves comparably high storage capacity (demonstrated up to a pattern-to-neuron ratio of 1.5) and noise robustness (recalling patterns from inputs with around 80% corruption) while drastically reducing training time, establishing KRR as an efficient method for building high-performance associative memories.
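
The closed-form training the abstract refers to is a standard KRR dual solution; a minimal sketch follows, where the RBF kernel, the ridge strength `lam`, and the width `gamma` are illustrative assumptions.

```python
import numpy as np

def train_krr(patterns, gamma=0.1, lam=1e-3):
    """One-shot KRR training: alpha = (K + lam * I)^{-1} T.

    patterns: (P, N) matrix of +/-1 stored patterns; targets T = patterns.
    Returns dual variables alpha of shape (P, N), one column per neuron.
    """
    d2 = np.sum((patterns[:, None] - patterns[None, :]) ** 2, axis=-1)
    K = np.exp(-gamma * d2)                                  # (P, P) Gram matrix
    return np.linalg.solve(K + lam * np.eye(len(patterns)), patterns)

def update(state, patterns, alpha, gamma=0.1):
    """One synchronous recall step: s <- sign(k(s, X) @ alpha)."""
    k = np.exp(-gamma * np.sum((patterns - state) ** 2, axis=1))
    return np.sign(k @ alpha)
```

Training reduces to a single P x P linear solve, which is the source of the speed advantage over KLR's iterative optimization.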
Abstract: Hebbian learning limits Hopfield network storage capacity (to a pattern-to-neuron ratio of about 0.14). We propose learning via Kernel Logistic Regression (KLR). Unlike linear methods, KLR uses kernels to implicitly map patterns into a high-dimensional feature space, enhancing their separability. By learning the dual variables, KLR dramatically improves storage capacity, achieving perfect recall even when the number of patterns exceeds the number of neurons (demonstrated up to a ratio of 1.5), and enhances noise robustness. KLR demonstrably outperforms both Hebbian and linear logistic regression approaches.
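
A minimal sketch of the dual-variable learning described above, assuming full-batch gradient descent on a per-neuron logistic (cross-entropy) loss with an RBF kernel; the learning rate and epoch count are assumptions.

```python
import numpy as np

def train_klr(patterns, gamma=0.1, lr=0.5, epochs=500):
    """Fit dual variables alpha (P, N) for a KLR associative memory.

    Each neuron is a kernel logistic classifier predicting its own bit of
    every stored pattern; patterns are +/-1, targets are mapped to {0, 1}.
    """
    P, _ = patterns.shape
    d2 = np.sum((patterns[:, None] - patterns[None, :]) ** 2, axis=-1)
    K = np.exp(-gamma * d2)                    # (P, P) Gram matrix
    T = (patterns + 1) / 2                     # targets in {0, 1}
    alpha = np.zeros_like(patterns, dtype=float)
    for _ in range(epochs):
        Y = 1.0 / (1.0 + np.exp(-K @ alpha))   # predicted bit probabilities
        alpha -= lr * (K @ (Y - T)) / P        # cross-entropy gradient step
    return alpha
```

Recall then uses the same sign-of-kernel-expansion update as any kernel Hopfield network; the iterative loop here is exactly what the KRR variant above replaces with a closed-form solve.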
Abstract: This paper proposes an extension of Random Projection Depth (RPD) to handle multimodality and non-convexity in data clouds. In the proposed framework, RPD is computed in a reproducing kernel Hilbert space; with the help of kernel principal component analysis, we expect the method to cope with such multimodality and non-convexity. Experimental results demonstrate that the proposed method outperforms RPD and is comparable to other existing detection models on benchmark datasets in terms of the Area Under the Receiver Operating Characteristic Curve (AUC-ROC).
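
A hedged sketch of the proposed pipeline: embed the data with kernel PCA, then approximate projection depth by a Monte Carlo maximum over random unit directions. The scikit-learn `KernelPCA` usage, the component count, and the number of projections are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kernel_rpd(X_train, X_test, n_components=10, n_proj=500, gamma=0.1):
    """Random Projection Depth in a kernel PCA embedding (low depth = outlier).

    outlyingness(x) = max_u |u.x - median(u.X)| / MAD(u.X), over random u.
    """
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    Z_train = kpca.fit_transform(X_train)
    Z_test = kpca.transform(X_test)
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n_proj, n_components))
    U /= np.linalg.norm(U, axis=1, keepdims=True)    # random unit directions
    proj_tr = Z_train @ U.T                          # (n_train, n_proj)
    proj_te = Z_test @ U.T                           # (n_test, n_proj)
    med = np.median(proj_tr, axis=0)
    mad = np.median(np.abs(proj_tr - med), axis=0) + 1e-12
    out = np.max(np.abs(proj_te - med) / mad, axis=1)
    return 1.0 / (1.0 + out)                         # depth in (0, 1]
```

Using the negative depth as an anomaly score is one way to compute the AUC-ROC comparisons described above.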