Abstract: Conformal predictive systems are sets of predictive distributions with theoretical out-of-sample calibration guarantees. Typically, the guarantee is that the set of predictions contains a forecast distribution whose prediction intervals achieve the correct marginal coverage at all levels. Conformal predictive systems are constructed using conformity measures that quantify how well possible outcomes conform with historical data. However, alternative methods have been proposed to construct predictive systems with more appealing theoretical properties. We study an approach to constructing predictive systems that we term Residual Distribution Predictive Systems. In the split conformal setting, this approach nests conformal predictive systems with a popular class of conformity measures, providing an alternative perspective on the classical approach. In the full conformal setting, the two approaches differ, and the new approach has the advantage that it does not rely on a conformity measure satisfying fairly stringent requirements to ensure that the predictive system is well-defined; it can readily be implemented alongside any point-valued regression method to yield predictive systems with out-of-sample calibration guarantees. The empirical performance of this approach is assessed using simulated data, where it is found to perform competitively with conformal predictive systems, while offering considerable scope for implementation with alternative regression methods.
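To make the split conformal construction concrete, here is a minimal sketch of a split conformal predictive system with the residual conformity measure R(x, y) = y - mu(x), the popular class that the residual-distribution approach nests in the split setting. The regression model, data generation, and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_predictive_system(X_train, y_train, X_cal, y_cal, x_new):
    """Predictive CDF at x_new from the empirical distribution of
    mu(x_new) + calibration residuals (conformity measure y - mu(x))."""
    model = LinearRegression().fit(X_train, y_train)
    residuals = np.sort(y_cal - model.predict(X_cal))  # calibration residuals
    support = model.predict(x_new.reshape(1, -1))[0] + residuals
    # Right-continuous empirical CDF with the usual conformal n+1 scaling
    return lambda y: np.searchsorted(support, y, side="right") / (len(support) + 1)

# Usage on synthetic data: the PIT value cdf(y_new) is approximately
# uniform over repeated draws, reflecting the marginal coverage guarantee.
rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3)); y = X @ beta + rng.normal(size=200)
x_new = rng.normal(size=3); y_new = x_new @ beta + rng.normal()
cdf = split_conformal_predictive_system(X[:100], y[:100], X[100:], y[100:], x_new)
print(cdf(y_new))
```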
Abstract: Proper scoring rules have been a subject of growing interest in recent years, not only as tools for evaluating probabilistic forecasts but also as methods for estimating probability distributions. In this article, we review the mathematical foundations of proper scoring rules, including general characterization results and important families of scoring rules. We discuss their role in statistics and machine learning for estimation and forecast evaluation. Furthermore, we comment on interesting developments in their use in applications.
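As a concrete illustration of two widely used proper scoring rules, the following sketch computes the logarithmic score for a density forecast and the continuous ranked probability score (CRPS) for a sample-based forecast via its kernel representation. The example forecasts and function names are illustrative, not taken from the article.

```python
import numpy as np
from scipy.stats import norm

def log_score(forecast_density, y):
    """Logarithmic score: negative log predictive density (lower is better)."""
    return -np.log(forecast_density(y))

def crps_ensemble(sample, y):
    """CRPS via its kernel representation:
    CRPS(F, y) = E|X - y| - 0.5 E|X - X'| with X, X' ~ F independent."""
    sample = np.asarray(sample)
    term1 = np.mean(np.abs(sample - y))
    term2 = 0.5 * np.mean(np.abs(sample[:, None] - sample[None, :]))
    return term1 - term2

# Scoring a standard normal forecast against an observation
y_obs = 0.3
print(log_score(norm(0, 1).pdf, y_obs))
print(crps_ensemble(norm(0, 1).rvs(size=1000, random_state=1), y_obs))
```

Both scores are proper: in expectation they are minimised by the true distribution of the outcome, which is what makes them suitable both for forecast evaluation and for estimation.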
Abstract: Probabilistic predictions are probability distributions over the set of possible outcomes. Such predictions quantify the uncertainty in the outcome, making them essential for effective decision making. By combining multiple predictions, the information sources used to generate the predictions are pooled, often resulting in a more informative forecast. Probabilistic predictions are typically combined by linearly pooling the individual predictive distributions; this encompasses several ensemble learning techniques, for example. The weights assigned to each prediction can be estimated based on their past performance, allowing more accurate predictions to receive higher weights. This can be achieved by finding the weights that optimise a proper scoring rule over some training data. By embedding predictions into a Reproducing Kernel Hilbert Space (RKHS), we show that estimating the linear pool weights that optimise kernel-based scoring rules is a convex quadratic optimisation problem. This permits an efficient implementation of the linear pool when optimally combining predictions on arbitrary outcome domains. This result also holds for other combination strategies, and we additionally study a flexible generalisation of the linear pool that overcomes some of its theoretical limitations, whilst still allowing an efficient implementation within the RKHS framework. These approaches are compared in an application to operational wind speed forecasts, where the generalisation is found to offer substantial improvements upon the traditional linear pool.
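The convexity claim can be sketched as follows: for a kernel score induced by a kernel k, the score of a linear pool with weights w is quadratic in w, so minimising the empirical score over the probability simplex is a convex quadratic programme. The sketch below assumes Monte Carlo samples from each component forecast and a Gaussian kernel; the function names and estimation details are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def gauss_kernel(x, y, sigma=1.0):
    """Gaussian kernel Gram matrix; x: (n, d), y: (m, d) -> (n, m)."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def optimal_pool_weights(component_samples, observations):
    """component_samples[t][i]: sample array (n_i, d) from component i at
    time t; observations: array (T, d). Minimises the empirical kernel
    score 0.5 w'Aw - b'w of the linear pool over the probability simplex."""
    K = len(component_samples[0])
    A = np.zeros((K, K)); b = np.zeros(K)
    for samples_t, y_t in zip(component_samples, observations):
        for i in range(K):
            b[i] += gauss_kernel(samples_t[i], y_t[None, :]).mean()
            for j in range(K):
                A[i, j] += gauss_kernel(samples_t[i], samples_t[j]).mean()
    res = minimize(lambda w: 0.5 * w @ A @ w - b @ w,
                   np.full(K, 1 / K), bounds=[(0, 1)] * K,
                   constraints=({"type": "eq", "fun": lambda w: w.sum() - 1},),
                   method="SLSQP")
    return res.x

# Two component forecasts bracketing the truth receive similar weights
rng = np.random.default_rng(2)
obs = rng.normal(0.5, 1.0, size=(50, 1))
samples = [[rng.normal(0, 1, size=(100, 1)), rng.normal(1, 1, size=(100, 1))]
           for _ in range(50)]
print(optimal_pool_weights(samples, obs))
```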
Abstract: Insurance pricing systems should fulfill the auto-calibration property to ensure that there is no systematic cross-financing between different price cohorts. Often, regression models are not auto-calibrated. We propose to apply isotonic recalibration to a given regression model to ensure auto-calibration. Our main result proves that, under a low signal-to-noise ratio, this isotonic recalibration step leads to explainable pricing systems, because the resulting isotonically recalibrated regression functions have low complexity.
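A minimal sketch of the isotonic recalibration step, using scikit-learn's IsotonicRegression on a synthetic miscalibrated model; the data-generating process is an illustrative assumption. Isotonic regression returns a piecewise constant, monotone transform whose fitted values are block averages of the outcomes, so the recalibrated predictions are auto-calibrated on the recalibration data.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)

# Synthetic miscalibrated model: overshoots the true mean exp(1.5 x)
X = rng.uniform(0, 1, size=2000)
y = rng.poisson(lam=np.exp(1.5 * X)).astype(float)
raw_pred = np.exp(2.0 * X)  # misspecified regression model

# Isotonic recalibration: monotone map from raw predictions to outcomes.
# The fit is piecewise constant (low complexity), and within each constant
# piece the recalibrated prediction equals the average observed outcome.
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
recalibrated = iso.fit_transform(raw_pred, y)

# The recalibrated premiums balance in aggregate: no systematic
# cross-financing on the recalibration data.
print(np.mean(y), np.mean(raw_pred), np.mean(recalibrated))
```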
Abstract: We present new classes of positive definite kernels on non-standard spaces that are integrally strictly positive definite or characteristic. In particular, we discuss radial kernels on separable Hilbert spaces, and introduce broad classes of kernels on Banach spaces and on metric spaces of strong negative type. The general results are used to give explicit classes of kernels on separable $L^p$ spaces and on sets of measures.
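One standard construction behind kernels on metric spaces of negative type is the distance-induced kernel k(x, y) = (d(x, z) + d(y, z) - d(x, y)) / 2 for a fixed base point z, which is positive definite whenever (M, d) has negative type and characteristic when the negative type is strong. Below is a minimal sketch, assuming the Euclidean metric on $\mathbb{R}^d$ (a metric of strong negative type); the function name and base-point choice are illustrative.

```python
import numpy as np

def distance_induced_kernel(X, z=None, metric=None):
    """Gram matrix of k(x, y) = 0.5 * (d(x, z) + d(y, z) - d(x, y))."""
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)  # Euclidean metric
    if z is None:
        z = X[0]  # arbitrary base point
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = 0.5 * (metric(X[i], z) + metric(X[j], z)
                             - metric(X[i], X[j]))
    return K

# Sanity check: the Gram matrix is positive semi-definite
rng = np.random.default_rng(4)
K = distance_induced_kernel(rng.normal(size=(20, 5)))
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # True up to rounding
```

The same construction applies with any metric of negative type, for example energy-type metrics on sets of measures, by swapping the `metric` argument.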