Jiashi Feng

NUS

Body Meshes as Points

May 06, 2021

PoseAug: A Differentiable Pose Augmentation Framework for 3D Human Pose Estimation

May 06, 2021

How Well Self-Supervised Pre-Training Performs with Streaming Data?

Apr 25, 2021

Token Labeling: Training a 85.4% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet

Apr 23, 2021

DeepViT: Towards Deeper Vision Transformer

Apr 19, 2021

Distill and Fine-tune: Effective Adaptation from a Black-box Source Model

Apr 04, 2021

Augmented Transformer with Adaptive Graph for Temporal Action Proposal Generation

Mar 30, 2021

AutoSpace: Neural Architecture Search with Less Human Interference

Mar 22, 2021

Coordinate Attention for Efficient Mobile Network Design

Mar 04, 2021

Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning

Feb 12, 2021