Abstract: Regression neural networks (NNs) are most commonly trained by minimizing the mean squared prediction error, which is highly sensitive to outliers and data contamination. Existing robust training methods for regression NNs are often limited in scope and rely primarily on empirical validation, with only a few offering partial theoretical guarantees. In this paper, we propose a new robust learning framework for regression NNs, which we call `rRNet', based on the $β$-divergence (also known as the density power divergence). It applies to a broad class of regression NNs, including models with non-smooth activation functions and error densities, and recovers classical maximum likelihood learning as a special case. The rRNet is implemented via an alternating optimization scheme, for which we establish convergence guarantees to stationary points under mild, verifiable conditions. The (local) robustness of rRNet is theoretically characterized through the influence functions of both the parameter estimates and the resulting rRNet predictor, which are shown to be bounded for suitable choices of the tuning parameter $β$, depending on the error density. We further prove that rRNet attains the optimal 50\% asymptotic breakdown point at the assumed model for all $β\in(0, 1]$, providing a strong global robustness guarantee that is largely absent from existing NN learning methods. Our theoretical results are complemented by simulation experiments and real-data analyses, illustrating the practical advantages of rRNet over existing approaches in both function approximation problems and prediction tasks with noisy observations.
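For context, a minimal sketch of the $β$-divergence objective underlying this type of learning is given below; the Gaussian error model and the notation ($f_{\boldsymbol{\theta}}$ for the network regression function, error scale $\sigma$, data $(\mathbf{x}_i, y_i)$) are illustrative assumptions and need not match the paper's exact formulation. For densities $g$ (true) and $f$ (model), the density power divergence is
\[
d_β(g,f) \;=\; \int \Big\{ f^{1+β}(y) \;-\; \Big(1+\tfrac{1}{β}\Big) f^{β}(y)\, g(y) \;+\; \tfrac{1}{β}\, g^{1+β}(y) \Big\}\, dy, \qquad β>0,
\]
which tends to the Kullback--Leibler divergence as $β \to 0$, so that minimizing it recovers maximum likelihood learning. Under a homoscedastic Gaussian error model, and dropping terms free of the parameters, the corresponding empirical training loss takes the form
\[
\frac{1}{n}\sum_{i=1}^{n} (2\pi\sigma^2)^{-β/2}\left[ \frac{1}{\sqrt{1+β}} \;-\; \Big(1+\tfrac{1}{β}\Big) \exp\!\Big( -\frac{β\,\{y_i - f_{\boldsymbol{\theta}}(\mathbf{x}_i)\}^2}{2\sigma^2} \Big) \right],
\]
in which the exponential factor downweights observations with large residuals, giving the robustness to outliers described above.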




Abstract: The minimum density power divergence estimator (MDPDE) has gained significant attention in the robust inference literature due to its strong robustness properties and high asymptotic efficiency; it is relatively easy to compute and can be interpreted as a generalization of the classical maximum likelihood estimator. It has been successfully applied in various setups, including the case of independent and non-homogeneous (INH) observations, which covers both classification- and regression-type problems with a fixed design. While the local robustness of this estimator has been theoretically validated through its bounded influence function, no general result is known about the global reliability or the breakdown behavior of this estimator under the INH setup, except for the specific case of location-type models. In this paper, we extend the notion of the asymptotic breakdown point from the case of independent and identically distributed data to the INH setup and derive a theoretical lower bound for the asymptotic breakdown point of the MDPDE under some easily verifiable assumptions. These results are further illustrated with applications to some fixed-design regression models and corroborated through extensive simulation studies.
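As a point of reference, the MDPDE objective under the INH setup can be sketched as follows; the notation (per-observation model densities $f_i(\cdot;\theta)$, tuning parameter $β$) is generic, and the paper's assumptions may differ in detail. With independent observations $y_1,\dots,y_n$ having possibly different model densities $f_i(\cdot;\theta)$, the estimator minimizes
\[
H_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n}\left[ \int f_i^{1+β}(y;\theta)\, dy \;-\; \Big(1+\tfrac{1}{β}\Big) f_i^{β}(y_i;\theta) \right], \qquad β>0.
\]
In a fixed-design linear regression model, for instance, $f_i(\cdot;\theta)$ would be the normal density with mean $\mathbf{x}_i^{\top}\gamma$ and variance $\sigma^2$, where $\theta=(\gamma,\sigma^2)$ and the covariates $\mathbf{x}_i$ are fixed.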