Zhangyang Wang

Symbolic Learning to Optimize: Towards Interpretability and Scalability

Apr 02, 2022

SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image

Apr 02, 2022

VFDS: Variational Foresight Dynamic Selection in Bayesian Neural Networks for Efficient Human Activity Recognition

Mar 31, 2022

Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization

Mar 18, 2022

Unified Visual Transformer Compression

Mar 15, 2022

Optimizer Amalgamation

Mar 15, 2022

The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy

Mar 12, 2022

Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice

Mar 09, 2022

Auto-scaling Vision Transformers without Training

Feb 27, 2022

Sparsity Winning Twice: Better Robust Generalization from More Efficient Training

Feb 27, 2022