Jiashi Feng

How Well Self-Supervised Pre-Training Performs with Streaming Data?

Apr 25, 2021
Dapeng Hu, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Alfred Shen, Jiashi Feng

Token Labeling: Training a 85.4% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet

Apr 23, 2021
Zihang Jiang, Qibin Hou, Li Yuan, Daquan Zhou, Xiaojie Jin, Anran Wang, Jiashi Feng

DeepViT: Towards Deeper Vision Transformer

Apr 19, 2021
Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, Jiashi Feng

Distill and Fine-tune: Effective Adaptation from a Black-box Source Model

Apr 04, 2021
Jian Liang, Dapeng Hu, Ran He, Jiashi Feng

Augmented Transformer with Adaptive Graph for Temporal Action Proposal Generation

Mar 30, 2021
Shuning Chang, Pichao Wang, Fan Wang, Hao Li, Jiashi Feng

AutoSpace: Neural Architecture Search with Less Human Interference

Mar 22, 2021
Daquan Zhou, Xiaojie Jin, Xiaochen Lian, Linjie Yang, Yujing Xue, Qibin Hou, Jiashi Feng

Coordinate Attention for Efficient Mobile Network Design

Mar 04, 2021
Qibin Hou, Daquan Zhou, Jiashi Feng

Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning

Feb 12, 2021
Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng

CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection

Feb 10, 2021
Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

Jan 28, 2021
Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Francis EH Tay, Jiashi Feng, Shuicheng Yan
