
Xiaoliang Dai

Trainable Projected Gradient Method for Robust Fine-tuning

Mar 28, 2023

Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors

Feb 28, 2023

Pruning Compact ConvNets for Efficient Inference

Jan 11, 2023

Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

Nov 18, 2022

3D-Aware Encoding for Style-based Neural Radiance Fields

Nov 12, 2022

Token Merging: Your ViT But Faster

Oct 17, 2022

Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP

Oct 09, 2022

Hydra Attention: Efficient Attention with Many Heads

Sep 15, 2022

Open-Set Semi-Supervised Object Detection

Aug 29, 2022

NASRec: Weight Sharing Neural Architecture Search for Recommender Systems

Jul 14, 2022