Yingyi Chen

Learning in Feature Spaces via Coupled Covariances: Asymmetric Kernel SVD and Nyström method

Jun 13, 2024

SURE: SUrvey REcipes for building reliable and robust deep networks

Mar 01, 2024

Self-Attention through Kernel-Eigen Pair Sparse Variational Gaussian Processes

Feb 02, 2024

Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation

May 31, 2023

Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer

Jul 25, 2022

Compressing Features for Learning with Noisy Labels

Jun 27, 2022

Boosting Co-teaching with Compression Regularization for Label Noise

Apr 28, 2021

Generalizing Random Fourier Features via Generalized Measures

May 30, 2020

Two-stage Best-scored Random Forest for Large-scale Regression

May 09, 2019

DSTP-RNN: a dual-stage two-phase attention-based recurrent neural networks for long-term and multivariate time series prediction

Apr 16, 2019