
Juan Nieto

ETH Zürich

Volumetric Instance-Aware Semantic Mapping and 3D Object Discovery

Mar 01, 2019

Informative Path Planning and Mapping for Active Sensing Under Localization Uncertainty

Feb 25, 2019

VIZARD: Reliable Visual Localization for Autonomous Vehicles in Urban Outdoor Environments

Feb 12, 2019

Comparing Task Simplifications to Learn Closed-Loop Object Picking Using Deep Reinforcement Learning

Jan 31, 2019

Observability-aware Self-Calibration of Visual and Inertial Sensors for Ego-Motion Estimation

Jan 22, 2019

SegMatch: Segment based loop-closure for 3D point clouds

Jan 15, 2019

SegMap: 3D Segment Mapping using Data-Driven Descriptors

Jan 15, 2019

A Complete System for Vision-Based Micro-Aerial Vehicle Mapping, Planning, and Flight in Cluttered Environments

Dec 10, 2018

From Perception to Decision: A Data-driven Approach to End-to-end Motion Planning for Autonomous Ground Robots

Nov 06, 2018

C-blox: A Scalable and Consistent TSDF-based Dense Mapping Approach

Sep 25, 2018