In this paper we analyze and extend the neural-network-based associative memory proposed by Gripon and Berrou. This associative memory resembles the celebrated Willshaw model with an added partite cluster structure. In the literature, two retrieval schemes have been proposed for the network dynamics, namely sum-of-sum and sum-of-max. Both offer considerably better performance than the Willshaw and Hopfield networks under comparable retrieval scenarios. Previous discussions and experiments concentrate on the erasure scenario, where a partial message is used as a probe to the network in the hope of retrieving the full message; in this setting, sum-of-max outperforms sum-of-sum in terms of retrieval rate by a large margin. However, we observe that when noise and errors are present and the network is queried by a corrupt probe, sum-of-max faces a severe limitation: its stringent activation rule prevents a neuron, once deactivated, from ever re-entering the dynamics. In this manuscript, we categorize and analyze different error scenarios so that both erasure and corruption scenarios can be treated consistently. We amend the network structure to improve the retrieval rate, at the cost of one extra scalar per neuron. We then propose five different approaches to deal with corrupt probes, which extend the network's capability and increase the robustness of the retrieval procedure. We compare all these proposals experimentally and discuss the pros and cons of each approach under different types of errors. Simulation results show that, if carefully designed, the network can preserve both a high retrieval rate and a low running time simultaneously, even when queried by a corrupt probe.
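The two retrieval rules can be sketched in a few lines. The following is a hedged, simplified illustration (toy sizes, a single decoding iteration, winner-take-all per cluster), not the exact published dynamics:

```python
# Minimal sketch of a GBNN-style memory: C clusters of L neurons; a
# message picks one neuron per cluster and is stored as a clique of
# binary synapses. Sizes, messages, and the one-shot winner-take-all
# retrieval below are illustrative simplifications.

C, L = 4, 8  # 4 clusters of 8 neurons; a neuron is the pair (cluster, index)

def store(messages):
    """Store each message as a clique over its (cluster, symbol) neurons."""
    W = set()  # binary synapses, kept as ordered pairs for easy lookup
    for m in messages:
        neurons = list(enumerate(m))
        for a in neurons:
            for b in neurons:
                if a[0] != b[0]:  # only cross-cluster edges
                    W.add((a, b))
    return W

def retrieve(W, probe, rule):
    """One decoding step; `None` marks an erased cluster in the probe."""
    active = {(c, s) for c, s in enumerate(probe) if s is not None}
    decoded = []
    for c in range(C):
        scores = []
        for s in range(L):
            n = (c, s)
            if rule == "sum_of_sum":
                # every active neuron connected to n contributes 1
                score = sum((a, n) in W for a in active)
            else:  # "sum_of_max"
                # each other *cluster* contributes at most one unit,
                # no matter how many of its neurons are active
                score = 0
                for c2 in range(C):
                    group = [a for a in active if a[0] == c2]
                    if c2 != c and group:
                        score += max((a, n) in W for a in group)
            scores.append(score)
        decoded.append(max(range(L), key=scores.__getitem__))
    return decoded
```

For example, after storing [0, 1, 2, 3] and [4, 5, 6, 7], probing with [0, 1, None, None] recovers [0, 1, 2, 3] under either rule. The two rules diverge once several neurons per cluster are active, which is exactly the corrupt-probe regime discussed above: sum-of-sum lets a heavily active cluster dominate the score, while sum-of-max caps each cluster's contribution at one.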
The Gripon-Berrou neural network (GBNN) is a recently invented recurrent neural network embracing an LDPC-like sparse encoding setup that makes it extremely resilient to noise and errors. A natural use of GBNN is as an associative memory. There are two activation rules for the neuron dynamics, namely sum-of-sum and sum-of-max; the latter outperforms the former in terms of retrieval rate by a wide margin. Prior discussions and experiments suggest that although sum-of-sum may lead the network to oscillate, sum-of-max always converges to an ensemble of neuron cliques corresponding to previously stored patterns. However, this is not entirely correct. In fact, sum-of-max often converges to bogus fixed points in which the stored cliques comprise only a small subset of the converged state. By taking advantage of this overlooked fact, we can greatly improve the retrieval rate. We discuss this particular issue and propose a number of heuristics to push sum-of-max beyond these bogus fixed points. To tackle the problem directly and completely, a novel post-processing algorithm is also developed and customized to the structure of GBNN. Experimental results show that the new algorithm achieves a substantial performance boost in terms of both retrieval rate and run-time, compared to the standard sum-of-max and all the other heuristics.
Associative memories store content in such a way that the content can later be retrieved by presenting the memory with a small portion of it, rather than with an address as in more traditional memories. Associative memories are used as building blocks for algorithms within database engines, anomaly detection systems, compression algorithms, and face recognition systems. A classical example of an associative memory is the Hopfield neural network. Recently, Gripon and Berrou have introduced an alternative construction which builds on ideas from the theory of error-correcting codes and which greatly outperforms the Hopfield network in capacity, diversity, and efficiency. In this paper we implement a variation of the Gripon-Berrou associative memory on a general-purpose graphics processing unit (GPU). The work of Gripon and Berrou proposes two retrieval rules, sum-of-sum and sum-of-max. The sum-of-sum rule uses only matrix-vector multiplication and is easily implemented on the GPU. The sum-of-max rule is much less straightforward to implement because it involves non-linear operations; however, it achieves significantly lower retrieval error rates. We propose a hybrid rule tailored for implementation on a GPU which achieves an 880-fold speedup without sacrificing any accuracy.
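The GPU-friendliness of sum-of-sum follows from its linear form: the scores of all neurons can be computed at once as a single matrix-vector product. A minimal sketch, with a toy 4-neuron synaptic matrix assumed purely for illustration:

```python
# Hedged sketch: sum-of-sum scores every neuron with one dense
# matrix-vector product over the binary synaptic matrix W and the
# binary activity vector v -- on a GPU this maps onto a single GEMV
# kernel. The 4-neuron matrix below is a toy example, not paper data.

def sum_of_sum_scores(W, v):
    """score[i] = sum_j W[i][j] * v[j]; the linear core of the rule."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

W = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]
v = [1, 1, 0, 0]                    # neurons 0 and 1 currently active
scores = sum_of_sum_scores(W, v)    # -> [1, 1, 2, 0]
```

Sum-of-max, by contrast, inserts a per-cluster max inside the sum, which breaks this purely linear structure; that non-linearity is what makes its GPU implementation less straightforward.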