Abstract: The educability model is a computational model recently proposed to describe the cognitive capability that makes humans unique among existing biological species on Earth in being able to create advanced civilizations. Educability is defined as a capability for acquiring and applying knowledge. It is intended both as a description of human capabilities and, equally, as an aspirational description of what can be usefully realized by machines. While the intention is for the model to be mathematically well defined, constructing an instance of the model involves a number of decisions. We call these decisions {\it parameters}. In a standard computer, two parameters are the memory capacity and the clock rate. There is no universally optimal choice for either one, or even for their ratio. Similarly, in a standard machine learning system, two parameters are the learning algorithm and the dataset used for training. Again, no universally optimal choices are known for either. An educable system has many more parameters than either of these two kinds of system. This short paper discusses some of the main parameters of educable systems and the broader implications of their existence.
Abstract: We consider the question of the stability of evolutionary algorithms to gradual changes, or drift, in the target concept. We define an algorithm to be resistant to drift if, for some inverse polynomial drift rate in the target function, it converges to accuracy $1-\epsilon$ with polynomial resources, and then stays within that accuracy indefinitely, except with probability $\epsilon$ at any one time. We show that every evolution algorithm, in the sense of Valiant (2007; 2009), can be converted, using the Correlational Query technique of Feldman (2008), into such a drift-resistant algorithm. For certain evolutionary algorithms, such as those for Boolean conjunctions, we give bounds on the rates of drift that they can resist. We also develop some new evolution algorithms that are resistant to significant drift. In particular, we give an algorithm for evolving linear separators over the spherically symmetric distribution that is resistant to a drift rate of $O(\epsilon/n)$, and another algorithm over the more general product normal distributions that resists a smaller drift rate. The above translation result can also be interpreted as one on the robustness of the notion of evolvability itself under changes of definition. As a second result in that direction, we show that every evolution algorithm can be converted to a quasi-monotonic one that can evolve from any starting point without the performance ever dipping significantly below that of the starting point. This permits the somewhat unnatural feature of arbitrary performance degradations to be removed from several known robustness translations.
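As an informal paraphrase of the drift-resistance condition stated in this abstract, written in our own shorthand (the tolerated drift-rate bound $\Delta$, the convergence time $t_0$, and the per-step accuracy $\mathrm{Perf}_t$ are illustrative symbols, not notation taken from the paper):

\[
\exists\, \Delta \ \ge\ \frac{1}{\mathrm{poly}(n, 1/\epsilon)}
\ \text{such that, if the target drifts at rate at most } \Delta, \text{ then }
\exists\, t_0 \le \mathrm{poly}(n, 1/\epsilon):\quad
\Pr\bigl[\mathrm{Perf}_t \ge 1-\epsilon\bigr] \ \ge\ 1-\epsilon
\quad \text{for all } t \ge t_0 .
\]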