Biological organisms must learn how to control their own bodies to achieve deliberate locomotion, that is, to predict their next body position from their current position and selected action. Such learning is goal-agnostic with respect to maximizing (minimizing) an environmental reward (penalty) signal. A cognitive map learner (CML) is a collection of three separate yet collaboratively trained artificial neural networks that learn to construct representations for the node states and edge actions of an arbitrary bidirectional graph. A CML thereby learns how to traverse the graph's nodes; however, it does not learn when or why to move from one node state to another. This work created CMLs with node states expressed as high-dimensional vectors suitable for hyperdimensional computing (HDC), a form of symbolic machine learning (ML). Graph knowledge (CML) was thus segregated from target node selection (HDC), allowing each ML approach to be trained independently. The first approach used HDC to engineer an arbitrary number of hierarchical CMLs, where each graph node state specified target node states for the next-lower-level CMLs to traverse to. Second, an HDC-based stimulus-response experience model was demonstrated per CML. Because hypervectors may be held in superposition with one another, multiple experience models were added together and run in parallel without any retraining. Lastly, a CML-HDC ML unit was modularized: it was trained with proxy symbols such that arbitrary, application-specific stimulus symbols could be operated upon without retraining either the CML or the HDC model. These methods provide a template for engineering heterogeneous ML systems.
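The superposition property described above can be illustrated with a minimal sketch of bipolar hypervector arithmetic. The codebook entries, stimulus/action names, and dimensionality below are illustrative assumptions, not the paper's actual implementation: two stimulus-response models are built by binding, added together, and queried without retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (assumed; typical for HDC)

def hv():
    """Random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Elementwise multiply: a self-inverse binding for bipolar vectors."""
    return a * b

def nearest(query, codebook):
    """Cleanup memory: return the codebook key most similar to the query."""
    return max(codebook, key=lambda k: int(np.dot(query, codebook[k])))

# Hypothetical codebooks for stimuli and responses
stimuli = {s: hv() for s in ["red", "green"]}
actions = {a: hv() for a in ["stop", "go"]}

# Two separately built stimulus-response experience models
model_a = bind(stimuli["red"], actions["stop"])
model_b = bind(stimuli["green"], actions["go"])

# Superpose the models by simple addition -- no retraining required
combined = model_a + model_b

# Query: unbind a stimulus, then clean up against the action codebook
recovered = nearest(bind(combined, stimuli["red"]), actions)
print(recovered)  # "stop" with overwhelming probability at D = 10,000
```

Because binding with a bipolar vector is its own inverse, unbinding the combined model with a stimulus leaves its paired action plus near-orthogonal noise from the other model, which the cleanup step discards.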
This paper presents and demonstrates a stochastic logic time delay reservoir design in FPGA hardware. The reservoir network is analyzed using several metrics, including kernel quality, generalization rank, and performance on simple benchmarks, and is also compared against a deterministic design. A novel re-seeding method is introduced to reduce the adverse effects of stochastic noise; it may also be applied to other stochastic logic reservoir computing designs, such as echo state networks. Benchmark results indicate that the proposed design performs well on noise-tolerant classification problems, but more work is needed to improve the stochastic logic time delay reservoir's robustness for regression problems. In addition, we show that the stochastic design can significantly reduce area cost if the conversion between binary and stochastic representations is implemented efficiently.
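For readers unfamiliar with stochastic logic, the following sketch shows the standard unipolar encoding that underlies such designs: a value in [0, 1] becomes a bitstream whose fraction of 1s equals the value, and a single AND gate then multiplies two streams. Stream length and seed are illustrative assumptions; the paper's FPGA design is not reproduced here.

```python
import random

random.seed(42)

def to_stream(p, n_bits):
    """Unipolar stochastic encoding: each bit is 1 with probability p."""
    return [1 if random.random() < p else 0 for _ in range(n_bits)]

def from_stream(stream):
    """Decode a stochastic stream back to a probability estimate."""
    return sum(stream) / len(stream)

def sc_multiply(sa, sb):
    """An AND gate multiplies independent unipolar streams bitwise."""
    return [a & b for a, b in zip(sa, sb)]

a, b = 0.5, 0.8
n = 1 << 14  # longer streams trade latency for lower stochastic noise
est = from_stream(sc_multiply(to_stream(a, n), to_stream(b, n)))
print(est)  # close to a * b = 0.4
```

The estimate's variance shrinks with stream length, which is why stochastic noise (and methods to mitigate it, such as the re-seeding approach above) matters for regression-style tasks that demand fine precision.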
A framework for implementing reservoir computing (RC) and extreme learning machines (ELMs), two types of artificial neural networks, based on 1D elementary cellular automata (CA) is presented, in which two separate CA rules explicitly implement the minimum computational requirements of the reservoir layer: hyperdimensional projection and short-term memory. CAs are cell-based state machines that evolve in time according to local rules based on a cell's current state and those of its neighbors. Notably, simple single-cell shift rules used as the memory rule in a fixed-edge CA afforded reasonable success in conjunction with a variety of projection rules, potentially reducing the optimal-solution search space significantly. Optimal iteration counts for the CA rule pairs can be estimated for some tasks based upon the category of the projection rule. Initial results support future hardware realization, where CAs potentially afford orders-of-magnitude reductions in size, weight, and power (SWaP) requirements compared with floating point RC implementations.
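A minimal sketch of a 1D elementary CA update with fixed (zero) edge cells may help clarify the two rule roles above. The rule choices are illustrative assumptions: rule 90 (XOR of neighbors) stands in for a projection-style rule, and rule 170, under which each cell copies its right neighbor, is an example of a single-cell shift rule of the kind usable for memory.

```python
def eca_step(state, rule, boundary=0):
    """One update of a 1-D elementary CA with fixed boundary cells.

    Each cell's next state is looked up from the 8-bit Wolfram rule
    number, indexed by the (left, center, right) neighborhood.
    """
    padded = [boundary] + state + [boundary]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

state = [0, 0, 0, 1, 0, 0, 0]

# Rule 90: next cell = left XOR right (a projection-style rule)
projected = eca_step(state, 90)

# Rule 170: next cell = right neighbor, i.e. a left shift (a memory-style rule)
shifted = eca_step(state, 170)

print(projected)  # [0, 0, 1, 0, 1, 0, 0]
print(shifted)    # [0, 0, 1, 0, 0, 0, 0]
```

In the framework described above, rules of these two kinds are paired and iterated so that one spreads input information across cells while the other retains a fading trace of past inputs.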