Piotr Bojanowski

WILLOW, LIENS

Think Before You Act: Unified Policy for Interleaving Language Reasoning with Actions

Apr 18, 2023

Sub-meter resolution canopy height maps using self-supervised learning and a vision transformer trained on Aerial and GEDI Lidar

Apr 17, 2023

DINOv2: Learning Robust Visual Features without Supervision

Apr 14, 2023

Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture

Jan 19, 2023

Learning Goal-Conditioned Policies Offline with Self-Supervised Reward Shaping

Jan 05, 2023

Co-training $2^L$ Submodels for Visual Recognition

Dec 09, 2022

The Hidden Uniform Cluster Prior in Self-Supervised Learning

Oct 13, 2022

Walk the Random Walk: Learning to Discover and Reach Goals Without Supervision

Jun 23, 2022

Masked Siamese Networks for Label-Efficient Learning

Apr 14, 2022

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision

Feb 22, 2022