
Michael Horrell


Wide Boosting

Jul 20, 2020
Michael Horrell

Figures 1–3 for Wide Boosting (images not included)

Gradient boosting (GB) is a popular methodology used to solve prediction problems through minimization of a differentiable loss function, $L$. GB is especially performant in low and medium dimensional problems. This paper presents a simple adjustment to GB motivated in part by artificial neural networks. Specifically, our adjustment inserts a square or rectangular matrix multiplication between the output of a GB model and the loss, $L$. This allows the output of a GB model to have increased dimension prior to being fed into the loss and is thus "wider" than standard GB implementations. We provide performance comparisons on several publicly available datasets. Wide Boosting outperforms standard GB in every dataset we try.
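The core adjustment can be sketched in a few lines. The sketch below is illustrative, not the paper's implementation: it assumes a squared-error loss and hypothetical shapes, with the GB model producing a $k$-dimensional output $F(x)$ that a $k \times q$ matrix $W$ maps to the $q$ dimensions the loss expects, so the loss becomes $L(y, F(x)W)$. The pseudo-residuals fed to the booster are then obtained from $\partial L / \partial F$ by the chain rule.

```python
import numpy as np

def wide_squared_error_grad(F, W, y):
    """Gradient of 0.5 * ||F W - y||^2 with respect to F (chain rule)."""
    residual = F @ W - y          # shape (n, q): loss-space residual
    return residual @ W.T         # shape (n, k): pseudo-residuals for the booster

rng = np.random.default_rng(0)
n, k, q = 5, 3, 2
F = rng.normal(size=(n, k))       # current boosted predictions, widened dim k
W = rng.normal(size=(k, q))       # widening matrix (k x q, hypothetical)
y = rng.normal(size=(n, q))       # targets in the loss's native dimension

g = wide_squared_error_grad(F, W, y)
print(g.shape)                    # one pseudo-residual per widened output column
```

Because the widening only changes the gradient (and Hessian) handed to the booster, it can in principle be bolted onto any GB library that accepts a custom objective.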

* Gradient Boosting, Wide Neural Networks 

Nonparametric Reduced Rank Regression

Jan 09, 2013
Rina Foygel, Michael Horrell, Mathias Drton, John Lafferty

Figures 1–2 for Nonparametric Reduced Rank Regression (images not included)

We propose an approach to multivariate nonparametric regression that generalizes reduced rank regression for linear models. An additive model is estimated for each dimension of a $q$-dimensional response, with a shared $p$-dimensional predictor variable. To control the complexity of the model, we employ a functional form of the Ky-Fan or nuclear norm, resulting in a set of function estimates that have low rank. Backfitting algorithms are derived and justified using a nonparametric form of the nuclear norm subdifferential. Oracle inequalities on excess risk are derived that exhibit the scaling behavior of the procedure in the high dimensional setting. The methods are illustrated on gene expression data.
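In the linear special case, nuclear-norm regularization shrinks the singular values of the coefficient matrix, which is the finite-dimensional analogue of the functional shrinkage the paper's backfitting algorithms perform. The sketch below shows singular value soft-thresholding (the proximal operator of the nuclear norm) applied after a least-squares fit; the one-step update and all names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def svt(B, lam):
    """Proximal operator of lam * ||B||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))                               # p = 4 predictors
B_true = np.outer(rng.normal(size=4), rng.normal(size=3))  # rank-1 truth
Y = X @ B_true + 0.1 * rng.normal(size=(50, 3))            # q = 3 responses

# One shrinkage step applied to the least-squares solution:
B_ls = np.linalg.lstsq(X, Y, rcond=None)[0]
B_hat = svt(B_ls, lam=0.5)
print(np.linalg.matrix_rank(B_hat))  # thresholding can zero out small singular values
```

Larger values of `lam` zero out more singular values, trading fit for a lower-rank (and hence lower-complexity) estimate, just as the functional nuclear norm trades excess risk for low-rank function estimates.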
