
Xinlei Chen

Learning to (Learn at Test Time)

Oct 20, 2023

Test-Time Training on Video Streams

Jul 12, 2023

Path Generation for Wheeled Robots Autonomous Navigation on Vegetated Terrain

Jun 15, 2023

Improving Selective Visual Question Answering by Learning from Your Peers

Jun 14, 2023

R-MAE: Regions Meet Masked Autoencoders

Jun 08, 2023

ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders

Jan 02, 2023

UniT3D: A Unified Transformer for 3D Dense Captioning and Visual Grounding

Dec 01, 2022

EurNet: Efficient Multi-Range Relational Modeling of Spatial Multi-Relational Data

Nov 23, 2022

Exploring Long-Sequence Masked Autoencoders

Oct 13, 2022

Test-Time Training with Masked Autoencoders

Sep 15, 2022