In the realm of autonomous robotics, a critical challenge lies in developing robust solutions for Active Collaborative SLAM, in which multiple robots must collaboratively explore and map an unknown environment while intelligently coordinating their movements and sensor data acquisition. To this end, we present two approaches for coordinating a multi-robot system performing Active Collaborative SLAM (AC-SLAM) for environmental exploration. Our two coordination approaches, synchronous and asynchronous, implement a methodology by which a central server prioritizes robot goal assignments. We also present a method to efficiently spread the robots for maximum exploration while keeping SLAM uncertainty low. Both coordination approaches were evaluated through simulation on publicly available datasets, obtaining promising results.
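The idea of a central server that prioritizes goal assignments while spreading robots out can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the scoring terms (frontier size as an information proxy, travel distance, covariance trace as a SLAM-uncertainty proxy), the weights, and all function names are assumptions chosen for illustration.

```python
import math

def score_goal(robot_pos, goal, frontier_size, trace_cov,
               w_info=1.0, w_dist=0.5, w_unc=0.3):
    """Utility of sending a robot to a candidate goal: reward expected
    new area (frontier size), penalize travel distance and the SLAM
    uncertainty (covariance trace) expected at the goal."""
    dist = math.hypot(goal[0] - robot_pos[0], goal[1] - robot_pos[1])
    return w_info * frontier_size - w_dist * dist - w_unc * trace_cov

def assign_goals(robots, goals):
    """Greedy central-server assignment: repeatedly commit the best
    (robot, goal) pair, so robots spread over distinct frontiers
    instead of crowding the same one.

    robots: dict robot_name -> (x, y)
    goals:  list of ((x, y), frontier_size, trace_cov)
    """
    assignment = {}
    free_goals = list(goals)
    for _ in range(min(len(robots), len(free_goals))):
        best = max(
            ((r, g) for r in robots if r not in assignment
             for g in free_goals),
            key=lambda rg: score_goal(robots[rg[0]], rg[1][0],
                                      rg[1][1], rg[1][2]),
        )
        r, g = best
        assignment[r] = g[0]
        free_goals.remove(g)
    return assignment
```

Because each goal is removed once assigned, two nearby robots are pushed toward different frontiers, which is one simple way to realize the "spreading" behavior the abstract mentions.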
The article introduces the concept of "diversity-aware" robotics and discusses the need to develop computational models to embed robots with diversity-awareness: that is, robots capable of adapting and re-configuring their behavior to recognize, respect, and value the uniqueness of the person they interact with, promoting inclusion regardless of age, race, gender, cognitive or physical capabilities, and so on. Finally, the article discusses possible technical solutions based on Ontologies and Bayesian Networks, starting from previous experience with culturally competent robots.
This article presents the design and the implementation of CAIR: a cloud system for knowledge-based autonomous interaction devised for Social Robots and other conversational agents. The system is particularly convenient for low-cost robots and devices. Developers are provided with a sustainable solution to manage verbal and non-verbal interaction through a network connection, with about 3,000 topics of conversation ready for "chit-chatting" and a library of pre-cooked plans that only needs to be grounded into the robot's physical capabilities. The system is structured as a set of REST API endpoints so that it can be easily expanded by adding new APIs to improve the capabilities of the clients connected to the cloud. Another key feature of the system is that it has been designed to make the development of its clients straightforward: in this way, multiple devices can be easily endowed with the capability of autonomously interacting with the user, understanding when to perform specific actions, and exploiting all the information provided by cloud services. The article outlines and discusses the results of the experiments performed to assess the system's performance in terms of response time, paving the way for its use both for research and market solutions. Links to repositories with clients for ROS and popular robots such as Pepper and NAO are given.
Since the demand for renewable solar energy is continuously growing, the need for more frequent, precise, and quick autonomous aerial inspections using Unmanned Aerial Vehicles (UAVs) may become fundamental to reduce costs. However, UAV-based inspection of Photovoltaic (PV) arrays is still an open problem. Companies in the field complain that GPS-based navigation is not adequate to accurately cover PV arrays when acquiring the images to be analyzed to determine the PV panels' status. Indeed, when instructing UAVs to move along a sequence of waypoints at low altitude, two sources of error may degrade performance: (i) the difference between the actual UAV position and the one estimated with the GPS, and (ii) the difference between the UAV position returned by the GPS and the position of waypoints extracted from georeferenced images acquired through Google Earth or similar tools. These errors make it impossible to reliably track rows of PV modules without human intervention. The article proposes an approach for inspecting PV arrays with autonomous UAVs equipped with an RGB and a thermal camera, the latter being typically used to detect heat failures on the panels' surface: we introduce a portfolio of techniques to process data from both cameras for autonomous navigation, including an optimization procedure for improving panel detection and an Extended Kalman Filter (EKF) to filter data from the RGB and thermal cameras. Experimental tests performed both in simulation and in an actual PV plant are reported, confirming the validity of the approach.
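The idea of filtering detections from two cameras with different noise levels can be sketched with a scalar Kalman filter tracking the UAV's lateral offset from a PV row's centre line. This is an illustration of the general fusion principle only, not the paper's EKF: the state, motion model, and all noise values are assumptions.

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter tracking the lateral offset of a UAV
    from the centre line of a PV row. Both cameras measure the same
    offset, each with its own noise variance, and are fused by
    sequential updates."""

    def __init__(self, x0=0.0, p0=1.0, q=0.01):
        self.x = x0  # state estimate (lateral offset, metres)
        self.p = p0  # estimate variance
        self.q = q   # process noise (random-walk motion model)

    def predict(self):
        # Random-walk prediction: state unchanged, uncertainty grows.
        self.p += self.q

    def update(self, z, r):
        """Fuse one measurement z with measurement variance r."""
        k = self.p / (self.p + r)      # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

kf = ScalarKalman(x0=0.0, p0=1.0)
kf.predict()
kf.update(z=0.4, r=0.05)  # RGB detection: lower noise, trusted more
kf.update(z=0.6, r=0.20)  # thermal detection: higher noise
```

After both updates the estimate lies between the two measurements but closer to the less noisy RGB one, which is exactly the weighting behavior that makes a filter useful when the two cameras disagree.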
This article introduces the concept of image "culturization", defined as the process of altering the "brushstroke of cultural features" that make objects perceived as belonging to a given culture while preserving their functionalities. First, we propose a pipeline for translating objects' images from a source to a target cultural domain based on Generative Adversarial Networks (GAN). Then, we gather data through an online questionnaire to test four hypotheses concerning the preferences of Italian participants towards objects and environments belonging to different cultures. As expected, results depend on individual tastes and preferences; however, they are in line with our conjecture that some people, during the interaction with a robot or another intelligent system, might prefer to be shown images whose cultural domain has been modified to match their cultural background.
The article proposes a system for knowledge-based conversation designed for Social Robots and other conversational agents. The proposed system relies on an Ontology for the description of all concepts that may be relevant conversation topics, as well as their mutual relationships. The article focuses on the algorithm for Dialogue Management that selects the most appropriate conversation topic depending on the user's input. Moreover, it discusses strategies to ensure a conversation flow that captures, as coherently as possible, the user's intention to drive the conversation in specific directions while avoiding purely reactive responses to what the user says. To measure the quality of the conversation, the article reports the tests performed with 100 recruited participants, comparing five conversational agents: (i) an agent addressing dialogue flow management based only on the detection of keywords in the speech, (ii) an agent based both on the detection of keywords and the Content Classification feature of Google Cloud Natural Language, (iii) an agent that picks conversation topics randomly, (iv) a human pretending to be a chatbot, and (v) one of the most famous chatbots worldwide: Replika. The subjective perception of the participants is measured both with the SASSI (Subjective Assessment of Speech System Interfaces) tool and with a custom survey for measuring the subjective perception of coherence.
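The baseline keyword-driven topic selection that condition (i) describes can be sketched in a few lines. This is a toy stand-in, not the system's Dialogue Management algorithm: the topic table, the stickiness rule, and all names are assumptions, and the Ontology is reduced to a plain dictionary.

```python
def select_topic(user_sentence, topics, current=None):
    """Pick the conversation topic whose keywords best match the
    user's sentence; stay on the current topic when nothing matches,
    to avoid purely reactive topic jumps.

    topics: dict topic_name -> set of keywords (a stand-in for an
    Ontology lookup of concepts related to each topic)."""
    words = set(user_sentence.lower().split())
    best, best_hits = current, 0
    for topic, keywords in topics.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = topic, hits
    return best

topics = {
    "food":   {"pasta", "pizza", "eat", "dinner"},
    "travel": {"trip", "flight", "beach", "visit"},
}
select_topic("i would love to eat pasta for dinner", topics)  # -> "food"
select_topic("hmm not sure", topics, current="travel")        # -> "travel"
```

The second call shows the non-reactive fallback: with no keyword hits, the agent keeps driving the current topic rather than switching at random.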
This article describes a novel approach to expand at run-time the knowledge base of an Artificial Conversational Agent. A technique for automatic knowledge extraction from the user's sentence and four methods to insert the newly acquired concepts in the knowledge base have been developed and integrated into a system that has already been tested for knowledge-based conversation between a social humanoid robot and residents of care homes. The run-time addition of new knowledge allows overcoming a limitation that affects most robots and chatbots: the inability to engage the user for a long time due to the restricted number of conversation topics. The insertion in the knowledge base of new concepts recognized in the user's sentence is expected to result in a wider range of topics that can be covered during an interaction, making the conversation less repetitive. Two experiments are presented to assess the performance of the knowledge extraction technique and the efficiency of the developed insertion methods when adding several concepts to the Ontology.
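The extract-then-insert loop can be sketched with a deliberately naive example. Neither the trigger-word extraction nor the single insertion strategy below is the paper's method (the paper develops four insertion methods over an Ontology); every name and rule here is an illustrative assumption, with the knowledge base reduced to a dictionary of parent/child links.

```python
def extract_new_concepts(sentence, knowledge_base):
    """Very naive extraction: any word the user says after 'like' or
    'love' that is not yet in the knowledge base becomes a candidate
    concept. A real system would use NLP parsing instead."""
    words = sentence.lower().replace(",", " ").split()
    concepts = []
    for trigger in ("like", "love"):
        if trigger in words:
            for w in words[words.index(trigger) + 1:]:
                if w not in ("and", "the", "a") and w not in knowledge_base:
                    concepts.append(w)
    return concepts

def insert_as_child(knowledge_base, concept, parent):
    """One possible insertion strategy: attach the new concept under an
    existing parent topic, so the dialogue manager can later reach it
    by traversing the hierarchy."""
    knowledge_base.setdefault(parent, {"children": []})
    knowledge_base[parent]["children"].append(concept)
    knowledge_base[concept] = {"children": [], "parent": parent}

kb = {"hobbies": {"children": []}}
for c in extract_new_concepts("I like gardening and chess", kb):
    insert_as_child(kb, c, "hobbies")
```

After the loop, "gardening" and "chess" are reachable conversation topics under "hobbies", which is the mechanism that makes later conversations less repetitive.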
Robots, along with sensors and telemedicine, have been identified as technologies that can assist and prolong independent living for older people, with robots especially being used to help prevent social isolation and depression.
Cultural adaptation, i.e., the matching of a robot's behaviours to the cultural norms and preferences of its user, is a well-known key requirement for the success of any assistive application. However, culture-dependent robot behaviours are often implicitly set by designers, thus not allowing for an easy and automatic adaptation to different cultures. This paper presents a method for the design of culture-aware robots that can automatically adapt their behaviour to conform to a given culture. We propose a mapping from cultural factors to related parameters of robot behaviours which relies on linguistic variables to encode heterogeneous cultural factors in a uniform formalism, and on fuzzy rules to encode qualitative relations among multiple variables. We illustrate the approach in two practical case studies.
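The combination of linguistic variables and fuzzy rules can be sketched as follows. The cultural factor, its terms and ranges, the rule base, and the output distances are all illustrative assumptions, not values from the paper; the point is only the mechanism of fuzzifying a factor and defuzzifying a behaviour parameter.

```python
def tri(x, a, b, c):
    """Triangular membership function for a linguistic term."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic variable for a hypothetical cultural factor scored on a
# 0-100 scale; the terms "low" and "high" and their ranges are
# illustrative assumptions.
power_distance = {
    "low":  lambda x: tri(x, -1, 0, 50),
    "high": lambda x: tri(x, 50, 100, 101),
}

def robot_interpersonal_distance(pd_score):
    """Tiny Mamdani-style rule base mapping the cultural factor to a
    robot behaviour parameter:
      IF power_distance IS high THEN distance IS far  (1.2 m)
      IF power_distance IS low  THEN distance IS near (0.6 m)
    Defuzzified as a membership-weighted average."""
    mu_low = power_distance["low"](pd_score)
    mu_high = power_distance["high"](pd_score)
    total = mu_low + mu_high
    return (0.6 * mu_low + 1.2 * mu_high) / total if total else 0.9
```

Because the qualitative relation lives in the rules rather than in code paths, adapting to a different culture means changing the scores and rule base, not the robot's control logic, which is the design motivation the abstract describes.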