Xinlei Chen

Exploring Long-Sequence Masked Autoencoders

Oct 13, 2022
Ronghang Hu, Shoubhik Debnath, Saining Xie, Xinlei Chen

Test-Time Training with Masked Autoencoders

Sep 15, 2022
Yossi Gandelsman, Yu Sun, Xinlei Chen, Alexei A. Efros

On the Importance of Asymmetry for Siamese Representation Learning

Apr 01, 2022
Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen

LoopITR: Combining Dual and Cross Encoder Architectures for Image-Text Retrieval

Mar 10, 2022
Jie Lei, Xinlei Chen, Ning Zhang, Mengjiao Wang, Mohit Bansal, Tamara L. Berg, Licheng Yu

Point-Level Region Contrast for Object Detection Pre-Training

Feb 09, 2022
Yutong Bai, Xinlei Chen, Alexander Kirillov, Alan Yuille, Alexander C. Berg

Masked Autoencoders Are Scalable Vision Learners

Dec 02, 2021
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick

Benchmarking Detection Transfer Learning with Vision Transformers

Nov 22, 2021
Yanghao Li, Saining Xie, Xinlei Chen, Piotr Dollár, Kaiming He, Ross Girshick

Towards Demystifying Representation Learning with Non-contrastive Self-supervision

Oct 11, 2021
Xiang Wang, Xinlei Chen, Simon S. Du, Yuandong Tian

An Empirical Study of Training Self-Supervised Vision Transformers

May 05, 2021
Xinlei Chen, Saining Xie, Kaiming He
