Ziyi Liang

Conformal inference is (almost) free for neural networks trained with early stopping

Jan 27, 2023
Ziyi Liang, Yanfei Zhou, Matteo Sesia

Early stopping based on hold-out data is a popular regularization technique designed to mitigate overfitting and increase the predictive accuracy of neural networks. Models trained with early stopping often provide relatively accurate predictions, but they generally still lack precise statistical guarantees unless they are further calibrated using independent hold-out data. This paper addresses that limitation with conformalized early stopping: a novel method that combines early stopping with conformal calibration while efficiently recycling the same hold-out data. This leads to models that are both accurate and able to provide exact predictive inferences without requiring multiple data splits or overly conservative adjustments. Practical implementations are developed for different learning tasks -- outlier detection, multi-class classification, regression -- and their competitive performance is demonstrated on real data.
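
As context for the calibration step the abstract refers to, below is a minimal sketch of standard split-conformal calibration for regression, the baseline that consumes a separate hold-out set; the paper's method recycles the early-stopping hold-out data instead. The function name, the absolute-residual score, and the model interface are illustrative assumptions, not the paper's actual procedure.

```python
# Minimal sketch of standard split-conformal regression (the baseline the
# paper improves upon); names and the residual score are illustrative.
import numpy as np

def split_conformal_interval(model, X_calib, y_calib, X_test, alpha=0.1):
    """Level-(1 - alpha) prediction intervals from an already-fitted model."""
    # Nonconformity scores: absolute residuals on the hold-out calibration set.
    scores = np.abs(y_calib - model.predict(X_calib))
    n = len(scores)
    # Finite-sample conformal quantile with the usual (n + 1) correction.
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(scores, level, method="higher")
    preds = model.predict(X_test)
    return preds - q, preds + q  # marginal coverage >= 1 - alpha
```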

Integrative conformal p-values for powerful out-of-distribution testing with labeled outliers

Aug 23, 2022
Ziyi Liang, Matteo Sesia, Wenguang Sun

This paper develops novel conformal methods to test whether a new observation was sampled from the same distribution as a reference set. Blending inductive and transductive conformal inference, the described methods re-weight standard conformal p-values in a principled way based on dependent side information from known out-of-distribution data, and automatically take advantage of the most powerful model from any collection of one-class and binary classifiers. The solution can be implemented either through sample splitting or via a novel transductive cross-validation+ scheme that may also be useful in other applications of conformal inference, owing to tighter guarantees than those of existing cross-validation approaches. After a study of false discovery rate control and power within a multiple testing framework with several possible outliers, the proposed solution is shown to outperform standard conformal p-values in simulations as well as in applications to image recognition and tabular data.
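
To make the re-weighting idea concrete, here is a minimal sketch of the standard split-conformal p-value that the integrative method builds on and re-weights using labeled outlier data. The score function, array shapes, and names are placeholder assumptions for illustration only.

```python
# Minimal sketch of a standard split-conformal p-value for out-of-distribution
# testing; the paper's integrative method re-weights p-values like this one.
import numpy as np

def conformal_pvalue(score_fn, X_calib, x_test):
    """Super-uniform p-value for H0: x_test is drawn like the rows of X_calib."""
    calib_scores = score_fn(X_calib)           # scores on inlier hold-out data
    test_score = score_fn(x_test[None, :])[0]  # higher score = more outlying
    n = len(calib_scores)
    # Rank-based p-value with the +1 correction that guarantees validity.
    return (1 + np.sum(calib_scores >= test_score)) / (n + 1)
```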

Locally Adaptive Transfer Learning Algorithms for Large-Scale Multiple Testing

Mar 25, 2022
Ziyi Liang, T. Tony Cai, Wenguang Sun, Yin Xia

Transfer learning has enjoyed increasing popularity in a range of big data applications. In the context of large-scale multiple testing, the goal is to extract and transfer knowledge learned from related source domains to improve the accuracy of simultaneously testing a large number of hypotheses in the target domain. This article develops a locally adaptive transfer learning algorithm (LATLA) for multiple testing. In contrast with existing covariate-assisted multiple testing methods that require the auxiliary covariates to be collected alongside the primary data on the same testing units, LATLA provides a principled and generic transfer learning framework that is capable of incorporating multiple samples of auxiliary data from related source domains, possibly in different dimensions/structures and from diverse populations. Both the theoretical and numerical results show that LATLA controls the false discovery rate and outperforms existing methods in power. LATLA is illustrated through an application to genome-wide association studies for the identification of disease-associated SNPs by cross-utilizing auxiliary data from a related linkage analysis.

* 26 pages, 6 figures 
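
To ground the multiple testing setting, below is a sketch of a weighted Benjamini-Hochberg procedure, the kind of covariate-weighted FDR machinery that covariate-assisted methods like LATLA generalize. This is an illustration only, not the LATLA algorithm itself, whose weights are learned locally from the auxiliary source-domain samples.

```python
# Illustrative weighted Benjamini-Hochberg procedure; NOT the LATLA algorithm,
# whose weights are estimated locally from auxiliary-domain data.
import numpy as np

def weighted_bh(pvals, weights, alpha=0.1):
    """Step-up rejection rule controlling the FDR at level alpha."""
    m = len(pvals)
    w = weights * m / weights.sum()  # normalize weights to average one
    q = pvals / w                    # weight-adjusted p-values
    order = np.argsort(q)
    # Largest k with the k-th smallest adjusted p-value <= alpha * k / m.
    passed = np.nonzero(q[order] <= alpha * np.arange(1, m + 1) / m)[0]
    k = passed[-1] + 1 if passed.size else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```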