



Abstract: Several predictive algorithms are described. Highlighted are variants that make predictions by superposing fields associated with the training data instances. They operate seamlessly with categorical, continuous, and mixed data. Predictive accuracy convergence is also discussed as a criterion for evaluating predictive algorithms, and methods for adapting algorithms so that they achieve it are described.
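
As a rough illustration of the field superposition idea (the abstract does not give the exact field definition), the sketch below sums, per class, a contribution from each training instance that decays with its distance to the query. The attribute_distance helper, the exponential decay, and the decay parameter are assumptions made for the example, not the paper's definitions.

# Minimal sketch of a "field superposition" predictor: each training instance
# contributes a field whose strength decays with its distance to the query,
# and class scores are the superposition (sum) of these contributions.
# Mixed-data handling and the decay function are illustrative assumptions.
import math

def attribute_distance(a, b):
    # Categorical attributes: 0/1 mismatch; numeric attributes: absolute difference.
    if isinstance(a, str) or isinstance(b, str):
        return 0.0 if a == b else 1.0
    return abs(a - b)

def field_superposition_predict(train_X, train_y, query, decay=1.0):
    scores = {}
    for x, label in zip(train_X, train_y):
        d = sum(attribute_distance(a, b) for a, b in zip(x, query))
        contribution = math.exp(-decay * d)   # field strength decays with distance
        scores[label] = scores.get(label, 0.0) + contribution
    return max(scores, key=scores.get)

# Example with mixed categorical and continuous attributes.
X = [["red", 1.0], ["red", 1.2], ["blue", 3.5], ["blue", 3.7]]
y = ["A", "A", "B", "B"]
print(field_superposition_predict(X, y, ["red", 1.1]))   # expected: "A"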



Abstract: A technique for improving the prediction accuracy of decision trees is proposed. It consists of evaluating the tree's branches in parallel, over multiple paths. The technique yields predictions that are more closely aligned with those generated by the nearest neighborhood variant of the deodata algorithms. It also enables the hybridization of the decision tree algorithm with the nearest neighborhood variant.
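
A minimal sketch of multi-path evaluation, under assumptions of mine: if the query value falls within a margin of a split threshold, both branches are followed and the class counts of every leaf reached are summed. The tree layout, the margin rule, and the count aggregation are illustrative, not the paper's specification.

# Hedged sketch of evaluating a decision tree over multiple paths.
from collections import Counter

# Each internal node: (attribute_index, threshold, left_subtree, right_subtree).
# Each leaf: Counter of training class counts that reached it.
TREE = (0, 5.0,
        (1, 2.0, Counter(A=8, B=1), Counter(A=3, B=3)),
        Counter(B=9))

def multipath_counts(node, query, margin=0.5):
    if isinstance(node, Counter):          # leaf reached
        return node
    attr, threshold, left, right = node
    value = query[attr]
    if abs(value - threshold) <= margin:   # ambiguous split: follow both branches
        return multipath_counts(left, query, margin) + multipath_counts(right, query, margin)
    branch = left if value <= threshold else right
    return multipath_counts(branch, query, margin)

def predict(query, margin=0.5):
    counts = multipath_counts(TREE, query, margin)
    return counts.most_common(1)[0][0]

print(predict([4.8, 1.9]))   # near the root threshold, so both subtrees contribute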



Abstract: A tie-breaking method is proposed for choosing the predicted class, or outcome, in a decision tree. The method is an adaptation of a similar technique used for deodata predictors.
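
The abstract does not describe the tie-breaking rule itself, so the following is only a plausible sketch: a tie among leaf classes is broken by consulting the class counts of ancestor nodes, closest first. The fallback-to-ancestors rule and the path_counts representation are hypothetical, not the adaptation described in the paper.

# Illustrative tie-break only, not the paper's method.
from collections import Counter

def tie_broken_prediction(path_counts):
    """path_counts: class Counters along the root-to-leaf path, leaf last."""
    leaf = path_counts[-1]
    best = max(leaf.values())
    tied = sorted(c for c, n in leaf.items() if n == best)
    # Walk back toward the root until one of the tied classes dominates.
    for counts in reversed(path_counts[:-1]):
        best_up = max(counts.get(c, 0) for c in tied)
        tied = sorted(c for c in tied if counts.get(c, 0) == best_up)
        if len(tied) == 1:
            break
    return tied[0]

root = Counter(A=40, B=25, C=35)
mid = Counter(A=10, B=10, C=12)
leaf = Counter(A=3, B=3)          # tie between A and B at the leaf
print(tie_broken_prediction([root, mid, leaf]))   # "A" after consulting the root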


Abstract: A probabilistic alternative to the Gower distance is proposed. The probabilistic distance enables the construction of a generic deodata predictor.
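
For reference, the sketch below computes the standard Gower distance and one possible probabilistic reading of it, in which each attribute contributes the empirical probability that a random pair of training values is at least as similar as the observed pair. The probabilistic variant is an assumption for illustration; the paper's actual definition may differ.

from itertools import combinations

def gower(u, v, ranges):
    """Standard Gower distance; ranges[i] is the numeric range of attribute i, or None if categorical."""
    total = 0.0
    for a, b, r in zip(u, v, ranges):
        if r is None:                       # categorical: simple mismatch
            total += 0.0 if a == b else 1.0
        else:                               # numeric: range-normalised difference
            total += abs(a - b) / r if r else 0.0
    return total / len(u)

def probabilistic_distance(u, v, columns):
    """Hypothetical probabilistic variant; columns[i] is the list of training values of attribute i."""
    total = 0.0
    for a, b, col in zip(u, v, columns):
        if isinstance(a, str):              # categorical attribute
            d_obs = 0.0 if a == b else 1.0
            diffs = [0.0 if x == y else 1.0 for x, y in combinations(col, 2)]
        else:                               # numeric attribute
            d_obs = abs(a - b)
            diffs = [abs(x - y) for x, y in combinations(col, 2)]
        # Probability that a random training pair is at least as similar as (a, b).
        total += sum(1 for d in diffs if d <= d_obs) / len(diffs)
    return total / len(u)

colors = ["red", "red", "blue", "green"]
sizes = [1.0, 1.5, 3.0, 4.0]
print(gower(["red", 1.0], ["blue", 3.0], [None, 3.0]))
print(probabilistic_distance(["red", 1.0], ["blue", 3.0], [colors, sizes]))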




Abstract: A family of concurrent data predictors is derived from the decision tree classifier by removing the restriction that attributes be evaluated sequentially. When attributes are evaluated concurrently, the decision tree collapses into a flat structure. Experiments indicate improvements in prediction accuracy.
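
A hedged sketch of concurrent attribute evaluation, assuming that "collapsing into a flat structure" amounts to matching every attribute against the training data at once and summing per-attribute match counts into class scores. The summation rule is my reading, not necessarily the paper's exact aggregation.

# Sketch of a flat, concurrent attribute evaluation over the training data.
from collections import defaultdict

def concurrent_predict(train_X, train_y, query):
    scores = defaultdict(int)
    for x, label in zip(train_X, train_y):
        # Each attribute is evaluated independently; matches accumulate concurrently.
        matches = sum(1 for a, q in zip(x, query) if a == q)
        scores[label] += matches
    return max(scores, key=scores.get)

X = [["sunny", "hot", "high"],
     ["sunny", "mild", "high"],
     ["rain", "mild", "normal"],
     ["rain", "cool", "normal"]]
y = ["no", "no", "yes", "yes"]
print(concurrent_predict(X, y, ["sunny", "cool", "normal"]))   # expected: "yes"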