CLAP: Contrastive Latent Action Pretraining for Learning Vision-Language-Action Models from Human Videos

Jan 07, 2026
