Zhouchen Lin

Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap

Mar 25, 2022

A Unified Contrastive Energy-based Model for Understanding the Generative Ability of Adversarial Training

Mar 25, 2022

Do We Really Need a Learnable Classifier at the End of Deep Neural Network?

Mar 17, 2022

Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity

Feb 16, 2022

On Training Implicit Models

Nov 24, 2021

Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness

Nov 03, 2021

Residual Relaxation for Multi-view Representation Learning

Oct 28, 2021

Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State

Sep 29, 2021

Is Attention Better Than Matrix Decomposition?

Sep 09, 2021

Under-bagging Nearest Neighbors for Imbalanced Classification

Sep 01, 2021