
Lorenzo Natale


iCub Detecting Gazed Objects: A Pipeline Estimating Human Attention

Aug 25, 2023
Shiva Hanifi, Elisa Maiettini, Maria Lombardi, Lorenzo Natale

This paper explores the role of eye gaze in human-robot interaction and proposes a novel system for detecting the objects gazed at by a human using solely visual feedback. The system leverages face detection, human attention prediction, and online object detection, allowing the robot to perceive and interpret human gaze accurately and paving the way for establishing joint attention with human partners. Additionally, a novel dataset collected with the humanoid robot iCub is introduced, comprising over 22,000 images from ten participants gazing at different annotated objects. This dataset serves as a benchmark for evaluating the performance of the proposed pipeline. The paper also includes an experimental analysis of the pipeline's effectiveness in a human-robot interaction setting, examining the performance of each component. Furthermore, the developed system is deployed on the humanoid robot iCub, and a supplementary video showcases its functionality. The results demonstrate the potential of the proposed approach to enhance social awareness and responsiveness in social robotics, as well as to improve assistance and support in collaborative scenarios, promoting efficient human-robot collaboration. The code and the collected dataset will be released upon acceptance.
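
As a rough illustration of the final step of such a pipeline, the sketch below (my own minimal example, not the authors' code) scores each detected object box by the attention mass it accumulates under a gaze heatmap and selects the most attended one; the box format, heatmap shape, and area normalization are assumptions.

```python
# Hypothetical sketch of the gazed-object selection step: given object boxes from a
# detector and a gaze/attention heatmap predicted from the partner's face crop,
# pick the box that accumulates the most attention mass. Names and shapes are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def select_gazed_object(boxes, heatmap):
    """boxes: (N, 4) array of [x1, y1, x2, y2] in heatmap coordinates.
    heatmap: (H, W) attention map, higher = more likely gaze target."""
    scores = []
    for x1, y1, x2, y2 in boxes.astype(int):
        patch = heatmap[max(y1, 0):y2, max(x1, 0):x2]
        # Normalize by area so large boxes do not dominate.
        scores.append(patch.sum() / max(patch.size, 1))
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    heatmap = rng.random((240, 320))
    heatmap[100:140, 200:260] += 2.0          # simulate a gaze peak on one object
    boxes = np.array([[20, 30, 90, 120], [195, 95, 265, 145]])
    idx, scores = select_gazed_object(boxes, heatmap)
    print("gazed object index:", idx, "scores:", np.round(scores, 3))
```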


A Grasp Pose is All You Need: Learning Multi-fingered Grasping with Deep Reinforcement Learning from Vision and Touch

Jun 06, 2023
Federico Ceola, Elisa Maiettini, Lorenzo Rosasco, Lorenzo Natale

Multi-fingered robotic hands could enable robots to perform sophisticated manipulation tasks. However, teaching a robot to grasp objects with an anthropomorphic hand is an arduous problem due to the high dimensionality of the state and action spaces. Deep Reinforcement Learning (DRL) offers techniques to design control policies for this kind of problem without explicit environment or hand modeling. However, training these policies with state-of-the-art model-free algorithms is very challenging for multi-fingered hands. The main problem is that efficient exploration of the environment is not possible in such high-dimensional settings, causing issues in the initial phases of policy optimization. One way to address this is to rely on offline task demonstrations, but collecting them is often extremely demanding in terms of time and computational resources. In this work, we overcome these requirements and propose A Grasp Pose is All You Need (G-PAYN), a method for the anthropomorphic hand of the iCub humanoid. We develop an approach to automatically collect task demonstrations to initialize the training of the policy. The proposed grasping pipeline starts from a grasp pose generated by an external algorithm, which initiates the movement; a control policy (previously trained with G-PAYN) is then used to reach and grasp the object. We deploy the iCub in the MuJoCo simulator and use it to test our approach with objects from the YCB-Video dataset. The results show that G-PAYN outperforms current DRL baselines in the considered setting in terms of success rate and execution time. The code to reproduce the experiments will be released upon acceptance.
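
The following toy sketch illustrates the general idea of warm-starting an off-policy DRL agent with automatically collected demonstrations that begin from an externally provided grasp pose; the environment, scripted controller, and replay buffer are simplified stand-ins, not the G-PAYN implementation.

```python
# Minimal sketch (a toy stand-in, not the G-PAYN code) of seeding an off-policy RL
# replay buffer with automatically generated demonstrations that start from an
# externally provided grasp pose. Environment, controller and buffer are assumptions.
import numpy as np

class ToyReachEnv:
    """Toy 3D reaching task standing in for the simulated iCub hand."""
    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)
    def reset(self):
        self.pos = np.zeros(3)
        return self.pos.copy()
    def step(self, action):
        self.pos += np.clip(action, -0.05, 0.05)
        dist = np.linalg.norm(self.goal - self.pos)
        return self.pos.copy(), -dist, dist < 0.02, {}

def scripted_demo(env, steps=200):
    """Hand-crafted controller that moves straight toward the grasp pose."""
    transitions, obs = [], env.reset()
    for _ in range(steps):
        action = np.clip(env.goal - obs, -0.05, 0.05)
        next_obs, reward, done, _ = env.step(action)
        transitions.append((obs, action, reward, next_obs, done))
        obs = next_obs
        if done:
            break
    return transitions

if __name__ == "__main__":
    grasp_pose = np.array([0.3, -0.1, 0.2])      # e.g. from an external grasp planner
    replay_buffer = []
    for _ in range(5):                           # demonstrations to warm-start training
        replay_buffer.extend(scripted_demo(ToyReachEnv(grasp_pose)))
    print(f"collected {len(replay_buffer)} demonstration transitions")
```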

* Submitted to IROS 2023 

Self-improving object detection via disagreement reconciliation

Feb 21, 2023
Gianluca Scarpellini, Stefano Rosa, Pietro Morerio, Lorenzo Natale, Alessio Del Bue

Object detectors often experience a drop in performance when new environmental conditions are insufficiently represented in the training data. This paper studies how to automatically fine-tune a pre-existing object detector while exploring and acquiring images in a new environment without relying on human intervention, i.e., in a self-supervised fashion. In our setting, an agent initially explores the environment using a pre-trained off-the-shelf detector to locate objects and associate pseudo-labels. By assuming that pseudo-labels for the same object must be consistent across different views, we devise a novel mechanism for producing refined predictions from the consensus among observations. Our approach improves the off-the-shelf object detector by 2.66% in terms of mAP and outperforms the current state of the art without relying on ground-truth annotations.
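
A minimal sketch of the disagreement-reconciliation idea, under the assumption that detections of the same physical object have already been associated across views: the class is decided by majority vote and the box by confidence-weighted averaging. This is illustrative only, not the paper's mechanism.

```python
# Illustrative sketch (not the paper's code) of reconciling per-view pseudo-labels:
# detections of the same physical object observed from multiple views are merged
# by majority vote on the class and confidence-weighted averaging of the boxes.
import numpy as np
from collections import Counter

def reconcile(detections):
    """detections: list of (class_id, score, box[x1,y1,x2,y2]) for one object track."""
    classes = [c for c, _, _ in detections]
    majority_class, _ = Counter(classes).most_common(1)[0]
    kept = [(s, np.asarray(b, float)) for c, s, b in detections if c == majority_class]
    weights = np.array([s for s, _ in kept])
    boxes = np.stack([b for _, b in kept])
    refined_box = (weights[:, None] * boxes).sum(0) / weights.sum()
    return majority_class, float(weights.mean()), refined_box

if __name__ == "__main__":
    views = [(3, 0.9, [10, 10, 50, 60]),
             (3, 0.7, [12, 11, 52, 58]),
             (7, 0.4, [11, 12, 49, 61])]   # one disagreeing view
    print(reconcile(views))
```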

* This article is a conference paper related to arXiv:2302.03566 and is currently under review 

Key Design Choices for Double-Transfer in Source-Free Unsupervised Domain Adaptation

Feb 10, 2023
Andrea Maracani, Raffaello Camoriano, Elisa Maiettini, Davide Talon, Lorenzo Rosasco, Lorenzo Natale

Fine-tuning and Domain Adaptation have emerged as effective strategies for efficiently transferring deep learning models to new target tasks. However, target domain labels are not accessible in many real-world scenarios. This has led to the development of Unsupervised Domain Adaptation (UDA) methods, which only employ unlabeled target samples. Furthermore, efficiency and privacy requirements may also prevent the use of source domain data during the adaptation stage. This challenging setting, known as Source-Free Unsupervised Domain Adaptation (SF-UDA), is gaining interest among researchers and practitioners due to its potential for real-world applications. In this paper, we provide the first in-depth analysis of the main design choices in SF-UDA through a large-scale empirical study across 500 models and 74 domain pairs. We pinpoint the normalization approach, pre-training strategy, and backbone architecture as the most critical factors. Based on our quantitative findings, we propose recipes to best tackle SF-UDA scenarios. Moreover, we show that SF-UDA is competitive beyond standard benchmarks and backbone architectures as well, performing on par with UDA at a fraction of the data and computational cost. In the interest of reproducibility, we include the full experimental results and code as supplementary material.


Look around and learn: self-improving object detection by exploration

Feb 10, 2023
Gianluca Scarpellini, Stefano Rosa, Pietro Morerio, Lorenzo Natale, Alessio Del Bue

Object detectors often experience a drop in performance when new environmental conditions are insufficiently represented in the training data. This paper studies how to automatically fine-tune a pre-existing object detector while exploring and acquiring images in a new environment without relying on human intervention, i.e., in a fully self-supervised fashion. In our setting, an agent initially learns to explore the environment using a pre-trained off-the-shelf detector to locate objects and associate pseudo-labels. By assuming that pseudo-labels for the same object must be consistent across different views, we learn an exploration policy that mines hard samples and devise a novel mechanism for producing refined predictions from the consensus among observations. Our approach outperforms the current state of the art, and it closes the performance gap against a fully supervised setting without relying on ground-truth annotations. We also compare various exploration policies for the agent to gather more informative observations. Code and dataset will be made available upon paper acceptance.
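
As a hedged illustration of what "mining hard samples" can mean, the sketch below ranks candidate viewpoints by how much the detections observed there disagree, measured as the entropy of the average class distribution. The policy in the paper is learned, so this heuristic is purely an assumption for exposition.

```python
# Hedged sketch of the "mine hard samples" idea: among candidate viewpoints, prefer
# the one whose current detections disagree most with the consensus (here measured
# by the entropy of the average class distribution). Illustrative heuristic only.
import numpy as np

def disagreement(class_probs):
    """class_probs: (V, C) predicted class distributions for one object across V views."""
    mean = class_probs.mean(axis=0)
    return float(-(mean * np.log(mean + 1e-8)).sum())   # entropy of the consensus

def pick_next_view(candidate_probs):
    """candidate_probs: dict viewpoint -> (V, C) array of observations so far."""
    scores = {v: disagreement(p) for v, p in candidate_probs.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    consistent = np.tile([0.9, 0.05, 0.05], (3, 1))      # views agree on one class
    conflicting = rng.dirichlet([1, 1, 1], size=3)       # views disagree
    view, scores = pick_next_view({"kitchen": consistent, "hallway": conflicting})
    print("explore next:", view, scores)
```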


Collision-aware In-hand 6D Object Pose Estimation using Multiple Vision-based Tactile Sensors

Jan 31, 2023
Gabriele M. Caddeo, Nicola A. Piga, Fabrizio Bottarel, Lorenzo Natale

In this paper, we address the problem of estimating the in-hand 6D pose of an object in contact with multiple vision-based tactile sensors. We reason about the possible spatial configurations of the sensors along the object surface. Specifically, we filter contact hypotheses using geometric reasoning and a Convolutional Neural Network (CNN), trained on simulated object-agnostic images, to promote those that better comply with the actual tactile images from the sensors. We use the selected sensor configurations to optimize over the space of 6D poses using a gradient descent-based approach. We finally rank the obtained poses by penalizing those that are in collision with the sensors. We carry out experiments in simulation using the DIGIT vision-based sensor with several objects from the standard YCB model set. The results demonstrate that our approach estimates object poses that are compatible with the actual object-sensor contacts in $87.5\%$ of cases, while reaching an average positional error on the order of $2$ centimeters. Our analysis also includes qualitative results of experiments with a real DIGIT sensor.
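
The toy example below sketches the final ranking step on a spherical stand-in object: candidate poses are scored by how closely the object surface meets the contact points, with a heavy penalty whenever the surface penetrates a sensor. The radii, weights, and scoring function are illustrative assumptions, not the paper's formulation.

```python
# Toy sketch (illustrative assumptions, not the paper's implementation) of ranking
# candidate poses: reward surface-on-contact agreement and penalize sensor penetration.
import numpy as np

def score_pose(translation, contacts, sensor_radius=0.01, object_radius=0.03):
    """Toy spherical object: surface distance from each contact point to the object
    placed at `translation`. Negative distance means the sensor is penetrated."""
    dists = np.linalg.norm(contacts - translation, axis=1) - object_radius
    fit = -np.abs(dists).mean()                       # reward surface-on-contact
    collision = np.clip(-(dists + sensor_radius), 0, None).sum()
    return fit - 10.0 * collision                     # heavy penalty for collisions

if __name__ == "__main__":
    contacts = np.array([[0.03, 0.0, 0.0], [-0.03, 0.0, 0.0]])   # two tactile contacts
    candidates = [np.zeros(3), np.array([0.02, 0.0, 0.0])]
    ranked = sorted(candidates, key=lambda t: -score_pose(t, contacts))
    print("best candidate translation:", ranked[0])
```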

* Accepted for publication at 2023 IEEE International Conference on Robotics and Automation (ICRA) 

Towards Confidence-guided Shape Completion for Robotic Applications

Sep 09, 2022
Andrea Rosasco, Stefano Berti, Fabrizio Bottarel, Michele Colledanchise, Lorenzo Natale

Many robotic tasks involving some form of 3D visual perception greatly benefit from complete knowledge of the working environment. However, robots often have to tackle unstructured environments, and their onboard visual sensors can only provide incomplete information due to limited workspaces, clutter or object self-occlusion. In recent years, deep learning architectures for shape completion have begun gaining traction as an effective means of inferring a complete 3D object representation from partial visual data. Nevertheless, most existing state-of-the-art approaches provide a fixed output resolution in the form of voxel grids, strictly tied to the size of the neural network output stage. While this is enough for some tasks, e.g. obstacle avoidance in navigation, grasping and manipulation require finer resolutions, and simply scaling up the neural network outputs is computationally expensive. In this paper, we address this limitation by proposing an object shape completion method based on an implicit 3D representation that provides a confidence value for each reconstructed point. As a second contribution, we propose a gradient-based method for efficiently sampling such an implicit function at an arbitrary resolution, tunable at inference time. We experimentally validate our approach by comparing reconstructed shapes with ground truths and by deploying our shape completion algorithm in a robotic grasping pipeline. In both cases, we compare results with a state-of-the-art shape completion approach.
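
A minimal sketch of gradient-based sampling of an implicit surface at an arbitrary resolution, using a sphere signed distance function as a stand-in for the learned, confidence-aware implicit model; the update rule and finite-difference gradient are assumptions for illustration.

```python
# Hedged sketch: start from random points and walk them along the (numerical)
# gradient of the implicit function toward its zero level set. The sphere SDF
# stands in for the learned network; confidence handling is omitted.
import numpy as np

def implicit_fn(points, radius=0.5):
    """Signed distance of a sphere, standing in for the learned implicit model."""
    return np.linalg.norm(points, axis=1) - radius

def sample_surface(n_points=1000, steps=50, lr=0.5, eps=1e-4):
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1, 1, size=(n_points, 3))
    for _ in range(steps):
        f = implicit_fn(pts)
        # Finite-difference gradient of the implicit function.
        grad = np.stack([(implicit_fn(pts + eps * np.eye(3)[i]) - f) / eps
                         for i in range(3)], axis=1)
        pts = pts - lr * f[:, None] * grad          # Newton-like step toward f = 0
    return pts

if __name__ == "__main__":
    surface = sample_surface()
    print("mean |f| after refinement:", np.abs(implicit_fn(surface)).mean())
```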


One-Shot Open-Set Skeleton-Based Action Recognition

Sep 09, 2022
Stefano Berti, Andrea Rosasco, Michele Colledanchise, Lorenzo Natale

Action recognition is a fundamental capability for humanoid robots to interact and cooperate with humans. This application requires the action recognition system to be designed so that new actions can be easily added, while unknown actions are identified and ignored. In recent years, deep learning approaches have represented the principal solution to the action recognition problem. However, most models require a large dataset of manually labeled samples. In this work we target One-Shot deep learning models, because they can deal with just a single instance per class. Unfortunately, One-Shot models assume that, at inference time, the action to recognize falls within the support set, and they fail when the action lies outside it. Few-Shot Open-Set Recognition (FSOSR) solutions attempt to address this flaw, but current solutions consider only static images rather than image sequences, and static images are insufficient to discriminate between actions such as sitting down and standing up. In this paper we propose a novel model that addresses the FSOSR problem with a One-Shot model augmented with a discriminator that rejects unknown actions. This model is useful for applications in humanoid robotics, because it makes it easy to add new classes and to determine whether an input sequence is among those known to the system. We show how to train the whole model in an end-to-end fashion, and we perform quantitative and qualitative analyses. Finally, we provide real-world examples.
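
The sketch below illustrates the open-set decision on top of a one-shot matcher: a query embedding is matched to its nearest support embedding and rejected as unknown when the similarity falls below a threshold. The embeddings and threshold are toy stand-ins for the learned encoder and discriminator described in the paper.

```python
# Minimal sketch of an open-set decision over a one-shot support set:
# cosine-match the query to each class prototype and reject low-similarity inputs.
import numpy as np

def classify_open_set(query, support, labels, reject_threshold=0.7):
    """query: (D,) embedding; support: (K, D) one embedding per known action."""
    support = support / np.linalg.norm(support, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    sims = support @ query                       # cosine similarity to each class
    best = int(np.argmax(sims))
    label = labels[best] if sims[best] >= reject_threshold else "unknown"
    return label, float(sims[best])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    support = rng.normal(size=(3, 16))
    labels = ["wave", "sit_down", "stand_up"]
    known_query = support[1] + 0.05 * rng.normal(size=16)   # near a known action
    novel_query = rng.normal(size=16)                       # unrelated action
    print(classify_open_set(known_query, support, labels))
    print(classify_open_set(novel_query, support, labels))
```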


iCub Being Social: Exploiting Social Cues for Interactive Object Detection Learning

Jul 27, 2022
Maria Lombardi, Elisa Maiettini, Vadim Tikhanoff, Lorenzo Natale

Performing joint interaction requires constant mutual monitoring of one's own actions and their effects on the other's behaviour. Such action-effect monitoring is boosted by social cues and may result in an increased sense of agency. Joint action and joint attention are strictly correlated, and both contribute to the formation of precise temporal coordination. In human-robot interaction, the robot's ability to establish joint attention with a human partner and exploit various social cues to react accordingly is a crucial step towards creating communicative robots. Beyond its social component, effective human-robot interaction can also be seen as a way to make the robot's learning process for a given task more natural and robust. In this work we use different social skills, such as mutual gaze, gaze following, speech and human face recognition, to develop an effective teacher-learner scenario tailored to visual object learning in dynamic environments. Experiments on the iCub robot demonstrate that the system allows the robot to learn new objects through a natural interaction with a human teacher in the presence of distractors.
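
As a purely illustrative sketch (stubbed perception, hypothetical signal names), the loop below shows how social cues could gate when the robot stores a new labeled example of the object a teacher is presenting; it is not the system described in the paper.

```python
# Illustrative teacher-learner gating: social cues decide when a gazed-object crop
# is stored as a new labeled example. All names and signals are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SocialCues:
    mutual_gaze: bool               # teacher is looking at the robot
    known_teacher: bool             # face recognition confirms the teacher
    spoken_label: Optional[str]     # label parsed from speech, if any

def learning_step(cues: SocialCues, gazed_object_crop, memory: dict):
    """Store the crop only when the social cues indicate an intentional teaching act."""
    if cues.mutual_gaze and cues.known_teacher and cues.spoken_label:
        memory.setdefault(cues.spoken_label, []).append(gazed_object_crop)
        return f"learned a new example of '{cues.spoken_label}'"
    return "ignored (no teaching cues)"

if __name__ == "__main__":
    memory = {}
    print(learning_step(SocialCues(True, True, "mug"), "crop_0", memory))
    print(learning_step(SocialCues(False, True, None), "crop_1", memory))
    print(len(memory["mug"]), "example(s) stored for 'mug'")
```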
