Roland Siegwart

ETH Zürich

Depth Completion in Unseen Field Robotics Environments Using Extremely Sparse Depth Measurements

Feb 03, 2026

Discontinuity-aware Normal Integration for Generic Central Camera Models

Jul 08, 2025

CompSLAM: Complementary Hierarchical Multi-Modal Localization and Mapping for Robot Autonomy in Underground Environments

May 10, 2025

Towards Open-Source and Modular Space Systems with ATMOS

Jan 28, 2025

Learning Affordances from Interactive Exploration using an Object-level Map

Jan 10, 2025

Allocation for Omnidirectional Aerial Robots: Incorporating Power Dynamics

Dec 20, 2024

Evaluation of Human-Robot Interfaces based on 2D/3D Visual and Haptic Feedback for Aerial Manipulation

Oct 20, 2024

Radar Meets Vision: Robustifying Monocular Metric Depth Prediction for Mobile Robotics

Oct 01, 2024

Obstacle-Avoidant Leader Following with a Quadruped Robot

Oct 01, 2024

A robust baro-radar-inertial odometry m-estimator for multicopter navigation in cities and forests

Aug 11, 2024