Guangrun Wang

MixReorg: Cross-Modal Mixed Patch Reorganization is a Good Mask Learner for Open-World Semantic Segmentation

Aug 09, 2023

Language-free Compositional Action Generation via Decoupling Refinement

Jul 07, 2023

LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields

Apr 20, 2023

Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs

Dec 08, 2022

Structure-Preserving 3D Garment Modeling with Neural Sewing Machines

Nov 12, 2022

Learning Self-Regularized Adversarial Views for Self-Supervised Vision Transformers

Oct 16, 2022

Understanding Weight Similarity of Neural Networks via Chain Normalization Rule and Hypothesis-Training-Testing

Aug 08, 2022

Beyond Fixation: Dynamic Window Visual Transformer

Apr 08, 2022

Automated Progressive Learning for Efficient Training of Vision Transformers

Mar 28, 2022

DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers

Sep 21, 2021