Vincent Lepetit

Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild

Jul 17, 2020
Alexander Grabner, Yaming Wang, Peizhao Zhang, Peihong Guo, Tong Xiao, Peter Vajda, Peter M. Roth, Vincent Lepetit

Recent Advances in 3D Object and Hand Pose Estimation

Jun 10, 2020
Vincent Lepetit

ALCN: Adaptive Local Contrast Normalization

Apr 15, 2020
Mahdi Rad, Peter M. Roth, Vincent Lepetit

S2DNet: Learning Accurate Correspondences for Sparse-to-Dense Feature Matching

Apr 03, 2020
Hugo Germain, Guillaume Bourmaud, Vincent Lepetit

Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation under Hand-Object Interaction

Mar 30, 2020
Anil Armagan, Guillermo Garcia-Hernando, Seungryul Baek, Shreyas Hampali, Mahdi Rad, Zhaohui Zhang, Shipeng Xie, MingXiu Chen, Boshen Zhang, Fu Xiong, Yang Xiao, Zhiguo Cao, Junsong Yuan, Pengfei Ren, Weiting Huang, Haifeng Sun, Marek Hrúz, Jakub Kanis, Zdeněk Krňoul, Qingfu Wan, Shile Li, Linlin Yang, Dongheui Lee, Angela Yao, Weiguo Zhou, Sijia Mei, Yunhui Liu, Adrian Spurr, Umar Iqbal, Pavlo Molchanov, Philippe Weinzaepfel, Romain Brégier, Gregory Rogez, Vincent Lepetit, Tae-Kyun Kim

Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields

Feb 28, 2020
Michael Ramamonjisoa, Yuming Du, Vincent Lepetit

General 3D Room Layout from a Single View by Render-and-Compare

Jan 07, 2020
Sinisa Stekovic, Friedrich Fraundorfer, Vincent Lepetit

AssemblyNet: A large ensemble of CNNs for 3D Whole Brain MRI Segmentation

Nov 20, 2019
Pierrick Coupé, Boris Mansencal, Michaël Clément, Rémi Giraud, Baudouin Denis de Senneville, Vinh-Thong Ta, Vincent Lepetit, José V. Manjon

Smart Hypothesis Generation for Efficient and Robust Room Layout Estimation

Oct 27, 2019
Martin Hirzer, Peter M. Roth, Vincent Lepetit

LU-Net: An Efficient Network for 3D LiDAR Point Cloud Semantic Segmentation Based on End-to-End-Learned 3D Features and U-Net

Aug 30, 2019
Pierre Biasutti, Vincent Lepetit, Jean-François Aujol, Mathieu Brédif, Aurélie Bugeau
