Jitendra Malik

Coupling Vision and Proprioception for Navigation of Legged Robots
Dec 03, 2021
Zipeng Fu, Ashish Kumar, Ananye Agarwal, Haozhi Qi, Jitendra Malik, Deepak Pathak

Improved Multiscale Vision Transformers for Classification and Detection
Dec 02, 2021
Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, Christoph Feichtenhofer

Differentiable Spatial Planning using Transformers
Dec 02, 2021
Devendra Singh Chaplot, Deepak Pathak, Jitendra Malik

SEAL: Self-supervised Embodied Active Learning using Exploration and 3D Consistency
Dec 02, 2021
Devendra Singh Chaplot, Murtaza Dalal, Saurabh Gupta, Jitendra Malik, Ruslan Salakhutdinov

PyTorchVideo: A Deep Learning Library for Video Understanding
Nov 18, 2021
Haoqi Fan, Tullie Murrell, Heng Wang, Kalyan Vasudev Alwala, Yanghao Li, Yilei Li, Bo Xiong, Nikhila Ravi, Meng Li, Haichuan Yang, Jitendra Malik, Ross Girshick, Matt Feiszli, Aaron Adcock, Wan-Yen Lo, Christoph Feichtenhofer

Tracking People with 3D Representations
Nov 15, 2021
Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, Jitendra Malik

Minimizing Energy Consumption Leads to the Emergence of Gaits in Legged Robots
Oct 25, 2021
Zipeng Fu, Ashish Kumar, Jitendra Malik, Deepak Pathak

Ego4D: Around the World in 3,000 Hours of Egocentric Video
Oct 13, 2021
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

ABO: Dataset and Benchmarks for Real-World 3D Object Understanding
Oct 12, 2021
Jasmine Collins, Shubham Goel, Achleshwar Luthra, Leon Xu, Kenan Deng, Xi Zhang, Tomas F. Yago Vicente, Himanshu Arora, Thomas Dideriksen, Matthieu Guillaumin, Jitendra Malik

Differentiable Stereopsis: Meshes from multiple views using differentiable rendering
Oct 11, 2021
Shubham Goel, Georgia Gkioxari, Jitendra Malik
