Research has shown that Educational Robotics (ER) enhances student performance, interest, engagement and collaboration. However, the adoption of robotics in formal education has so far remained limited. Among other causes, this is due to the difficulty of determining the alignment of educational robotic learning activities with the learning outcomes envisioned by the curriculum, as well as their integration with traditional, non-robotics learning activities that are well established in teachers' practices. This work investigates the integration of ER into formal mathematics education through a quasi-experimental study employing the Thymio robot and Scratch programming to teach geometry to two classes of 15-year-old students, for a total of 26 participants. Three research questions were addressed: (1) Should an ER-based theoretical lecture precede, succeed or replace a traditional theoretical lecture? (2) What is the students' perception of and engagement in the ER-based lecture and exercises? (3) Do the findings differ according to students' prior appreciation of mathematics? The results suggest that ER activities are as valid as traditional ones in helping students grasp the relevant theoretical concepts. Robotics activities seem particularly beneficial during exercise sessions: students freely chose to do exercises that included the robot, rated them as significantly more interesting and useful than their traditional counterparts, and expressed their interest in introducing ER in other mathematics lectures. Finally, results were generally consistent between the students who liked mathematics and those who did not, suggesting the use of robotics as a means to broaden the number of students engaged in the discipline.
With the introduction of educational robotics (ER) and computational thinking (CT) in classrooms, there is a rising need for operational models that help ensure that CT skills are adequately developed. One such model is the Creative Computational Problem Solving Model (CCPS), which can be employed to improve the design of ER learning activities. Following a first validation with students, the objective of the present study is to validate the model with teachers, specifically considering how they may employ it in their own practices. The Utility, Usability and Acceptability framework was leveraged for the evaluation, through a survey of 334 teachers. Teachers found the CCPS model useful to foster transversal skills but could not recognise the impact of specific intervention methods on CT-related cognitive processes. Similarly, teachers perceived the model to be usable for activity design and intervention, although they felt unsure about how to use it to assess student learning and adapt their teaching accordingly. Finally, the teachers accepted the model, as shown by their intent to replicate the activity in their classrooms, but were less willing to modify it or create their own activities, suggesting that they need time to appropriate the model and its underlying tenets.
Recently, introducing computer science and educational robots in compulsory education has received increasing attention. However, the use of screens in classrooms is often met with resistance, especially in primary school. To address this issue, this study presents the development of a handwriting-based programming language for educational robots. Aiming to align better with existing classroom practices, it allows students to program a robot by drawing symbols with ordinary pens and paper. Regular smartphones are leveraged to process the hand-drawn instructions using computer vision and machine learning algorithms, and send the commands to the robot for execution. To align with the local computer science curriculum, an appropriate playground and scaffolded learning tasks were designed. The system was evaluated in a preliminary test with eight teachers, developers and educational researchers. While the participants pointed out that some technical aspects could be improved, they also acknowledged the potential of the approach to make computer science education in primary school more accessible.
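The translation step from recognised symbols to robot commands can be sketched as follows; the symbol names, command strings and the simple repeat construct are illustrative assumptions for this sketch, not the system's actual language:

```python
# Hypothetical mapping from recognised hand-drawn symbols to robot commands.
# In the real pipeline, a computer-vision stage would produce the symbol
# labels from a smartphone photo before this step runs.
SYMBOL_TO_COMMAND = {
    "arrow_up": "move_forward",
    "arrow_left": "turn_left",
    "arrow_right": "turn_right",
}

def compile_program(symbols):
    """Translate a sequence of recognised symbols into robot commands,
    expanding a simple ('repeat', n) marker that applies to the symbol
    immediately following it."""
    commands, i = [], 0
    while i < len(symbols):
        sym = symbols[i]
        if isinstance(sym, tuple) and sym[0] == "repeat":
            count = sym[1]
            commands.extend([SYMBOL_TO_COMMAND[symbols[i + 1]]] * count)
            i += 2
        else:
            commands.append(SYMBOL_TO_COMMAND[sym])
            i += 1
    return commands
```

The compiled command list would then be sent to the robot for execution, e.g. `compile_program(["arrow_up", ("repeat", 2), "arrow_left"])` yields one forward move followed by two left turns.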
A dialogue is successful when there is alignment between the speakers at different linguistic levels. In this work, we consider the dialogue occurring between interlocutors engaged in a collaborative learning task, and explore how performance and learning (i.e. task success) relate to dialogue alignment processes. The main contribution of this work is to propose new measures for automatically studying alignment in completely spontaneous spoken dialogues among children in the context of a collaborative learning activity. Our measures of alignment consider the children's use of expressions that are related to the task at hand, the follow-up actions these expressions give rise to, and how these link to task success. Focusing on expressions related to the task gives us insight into the way children use (potentially unfamiliar) terminology related to the task. A first finding of this work is that the measures we propose can capture elements of lexical alignment in such a context. Through these measures, we find that poorly performing teams often aligned too late in the dialogue to achieve task success, and that they were late in following up each other's instructions with actions. We also found that while interlocutors do not exhibit hesitation phenomena (which we measure by looking at fillers) when introducing expressions pertaining to the task, they do exhibit hesitation before accepting an expression, in the form of clarification. Lastly, we show that information management markers (measured by the discourse marker 'oh') occur in the general vicinity of the follow-up actions derived from (automatically) inferred instructions; good performers, however, tend to have this marker closer to these actions. Even if we cannot conclude that our measures are linked overall to the final measure of learning, they still reflect some fine-grained aspects of learning in the dialogue.
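A crude version of such a lexical alignment measure, recording how many turns pass before the other speaker first repeats a task expression, could look like the sketch below; the function, its inputs and the example data are illustrative assumptions, not the paper's actual measures:

```python
def lexical_alignment(utterances, task_expressions):
    """For each task expression, record the turn where it is first used and
    the turn where the *other* speaker first repeats it; the gap between the
    two is a crude alignment delay (smaller = earlier alignment).

    utterances: list of (speaker, text) pairs in dialogue order.
    Returns a dict mapping each repeated expression to its delay in turns.
    """
    first_use = {}   # expression -> (turn index, speaker who introduced it)
    delays = {}
    for i, (speaker, text) in enumerate(utterances):
        for expr in task_expressions:
            if expr in text:
                if expr not in first_use:
                    first_use[expr] = (i, speaker)
                elif speaker != first_use[expr][1] and expr not in delays:
                    delays[expr] = i - first_use[expr][0]
    return delays
```

On a toy dialogue where speaker A introduces "red switch" at turn 0 and speaker B first repeats it at turn 2, the measure reports a delay of 2 turns for that expression.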
Gestures are a natural communication modality for humans. The ability to interpret gestures is fundamental for robots aiming to interact naturally with humans. Wearable sensors are a promising means of monitoring human activity; in particular, the use of triaxial accelerometers for gesture recognition has been explored. Despite this, the state of the art lacks systems for reliable online gesture recognition from accelerometer data. This article proposes SLOTH, an architecture for online gesture recognition based on a wearable triaxial accelerometer, a Recurrent Neural Network (RNN) probabilistic classifier and a procedure for continuous gesture detection that relies on modelling gesture probabilities, guaranteeing (i) good recognition results in terms of precision and recall and (ii) immediate system reactivity.
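The general shape of such a continuous detection procedure, i.e. smoothing per-frame class probabilities over a sliding window and firing a detection when the smoothed probability crosses a threshold, can be sketched as follows; the gesture set, thresholds and the trivial stand-in for the RNN classifier are assumptions for illustration only, not the SLOTH implementation:

```python
from collections import deque

GESTURES = ["wave", "shake", "none"]
WINDOW = 5         # number of per-frame probability vectors to smooth over
THRESHOLD = 0.8    # detection threshold on the smoothed class probability

def classify_frame(ax, ay, az):
    """Stand-in for the RNN probabilistic classifier: maps one triaxial
    accelerometer sample to class probabilities (here, a trivial rule on
    acceleration magnitude, purely for illustration)."""
    mag = (ax ** 2 + ay ** 2 + az ** 2) ** 0.5
    if mag > 2.0:
        return {"wave": 0.9, "shake": 0.05, "none": 0.05}
    if mag > 1.2:
        return {"wave": 0.1, "shake": 0.8, "none": 0.1}
    return {"wave": 0.05, "shake": 0.05, "none": 0.9}

class OnlineDetector:
    """Smooths per-frame probabilities over a sliding window and emits a
    gesture event when the smoothed probability crosses the threshold,
    suppressing repeated events for an ongoing gesture."""
    def __init__(self):
        self.history = deque(maxlen=WINDOW)
        self.active = None  # gesture currently being performed, if any
    def push(self, probs):
        self.history.append(probs)
        avg = {g: sum(p[g] for p in self.history) / len(self.history)
               for g in GESTURES}
        best = max(avg, key=avg.get)
        if best != "none" and avg[best] >= THRESHOLD and best != self.active:
            self.active = best
            return best          # gesture onset detected
        if best == "none":
            self.active = None   # gesture ended; allow re-detection
        return None
```

Feeding the detector a stream of frames yields at most one event per gesture occurrence, which is what allows the system to be reactive without repeatedly triggering on the same gesture.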
Cultural adaptation, i.e., the matching of a robot's behaviours to the cultural norms and preferences of its user, is a well-known key requirement for the success of any assistive application. However, culture-dependent robot behaviours are often implicitly set by designers, thus not allowing for easy and automatic adaptation to different cultures. This paper presents a method for the design of culture-aware robots that can automatically adapt their behaviour to conform to a given culture. We propose a mapping from cultural factors to related parameters of robot behaviours, which relies on linguistic variables to encode heterogeneous cultural factors in a uniform formalism, and on fuzzy rules to encode qualitative relations among multiple variables. We illustrate the approach in two practical case studies.
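As an illustration of how linguistic variables and fuzzy rules can map a cultural factor to a robot behaviour parameter, here is a minimal sketch; the membership functions, terms and distance values are invented for the example, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic variable for a cultural factor, here a hypothetical
# "preference for interpersonal distance" normalised to [0, 1].
FACTOR_TERMS = {
    "low":    lambda x: tri(x, -0.5, 0.0, 0.5),
    "medium": lambda x: tri(x, 0.0, 0.5, 1.0),
    "high":   lambda x: tri(x, 0.5, 1.0, 1.5),
}

# Fuzzy rules "IF factor IS <term> THEN approach_distance IS <value>";
# crisp consequents (in metres) keep defuzzification to a weighted average.
RULES = [("low", 0.5), ("medium", 1.0), ("high", 1.5)]

def approach_distance(cultural_factor):
    """Defuzzify by averaging rule consequents weighted by firing strength."""
    num = den = 0.0
    for term, dist in RULES:
        w = FACTOR_TERMS[term](cultural_factor)
        num += w * dist
        den += w
    return num / den if den else 1.0
```

A factor value between two terms fires both corresponding rules partially, so the robot's approach distance varies smoothly rather than jumping between culture-specific presets.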
Cultural competence is a well-known requirement for effective healthcare, widely investigated in the nursing literature. We claim that personal assistive robots should likewise be culturally competent, aware of general cultural characteristics and of the different forms they take in different individuals, and sensitive to cultural differences while perceiving, reasoning, and acting. Drawing inspiration from existing guidelines for culturally competent healthcare and the state of the art in culturally competent robotics, we identify the key robot capabilities that enable culturally competent behaviours and discuss methodologies for their development and evaluation.
Daily life activities, such as eating and sleeping, are deeply influenced by a person's culture, hence generating differences in the way the same activity is performed by individuals belonging to different cultures. We argue that taking cultural information into account can improve the performance of systems for the automated recognition of human activities. We propose four different solutions to the problem and present a system which uses a Naive Bayes model to associate cultural information with semantic information extracted from still images. Preliminary experiments with a dataset of images of individuals lying on the floor, sleeping on a futon and sleeping on a bed suggest that: i) solutions explicitly taking cultural information into account are more accurate than culture-unaware solutions; and ii) the proposed system is a promising starting point for the development of culture-aware Human Activity Recognition methods.
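A minimal sketch of a Naive Bayes model that combines a culture-dependent prior over activities with object-level semantic features might look like this; the toy data, class names and Laplace-smoothed counts are illustrative assumptions, not the paper's dataset or model:

```python
from collections import defaultdict

# Toy training data: (culture, objects detected in the image, activity).
# All values are invented for illustration.
DATA = [
    ("japan", ["futon", "pillow"], "sleeping"),
    ("japan", ["futon"], "sleeping"),
    ("uk", ["bed", "pillow"], "sleeping"),
    ("uk", ["floor"], "lying"),
    ("japan", ["floor"], "lying"),
]

class CultureAwareNB:
    """Naive Bayes with a per-culture class prior:
    P(activity | culture, features) ∝ P(activity | culture) * Π P(f | activity)."""
    def __init__(self, data):
        self.prior = defaultdict(lambda: defaultdict(int))  # culture -> activity counts
        self.like = defaultdict(lambda: defaultdict(int))   # activity -> feature counts
        self.act_total = defaultdict(int)                   # feature tokens per activity
        for culture, feats, act in data:
            self.prior[culture][act] += 1
            for f in feats:
                self.like[act][f] += 1
            self.act_total[act] += len(feats)
        self.vocab = {f for _, feats, _ in data for f in feats}

    def predict(self, culture, feats):
        best, best_p = None, -1.0
        total = sum(self.prior[culture].values())
        for act in self.act_total:
            # Laplace-smoothed culture-conditioned prior and likelihoods.
            p = (self.prior[culture][act] + 1) / (total + len(self.act_total))
            for f in feats:
                p *= (self.like[act][f] + 1) / (self.act_total[act] + len(self.vocab))
            if p > best_p:
                best, best_p = act, p
        return best
```

The culture-conditioned prior is what makes the model culture-aware: the same visual evidence (e.g. a person on a futon) can receive different activity posteriors depending on the assisted person's culture.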
Providing the elderly and people with special needs, including those suffering from physical disabilities and chronic diseases, with the possibility of retaining as much of their independence as possible is one of the most important challenges our society is expected to face. Assistance models based on the home care paradigm are being adopted rapidly in almost all industrialized and emerging countries. Such paradigms hypothesize that it is necessary to ensure that the so-called Activities of Daily Living (ADL) are correctly and regularly performed by the assisted person to increase the perception of an improved quality of life. This chapter describes the computational inference engine at the core of Arianna, a system able to understand whether an assisted person performs a given set of ADL and to motivate him/her to perform them through a speech-mediated motivational dialogue, using a set of nearables to be installed in an apartment, plus a wearable to be worn or fitted in garments.
The nursing literature shows that cultural competence is an important requirement for effective healthcare. We claim that personal assistive robots should likewise be culturally competent, that is, they should be aware of general cultural characteristics and of the different forms they take in different individuals, and take these into account while perceiving, reasoning, and acting. The CARESSES project is a Europe-Japan collaborative effort that aims at designing, developing and evaluating culturally competent assistive robots. These robots will be able to adapt the way they behave, speak and interact to the cultural identity of the person they assist. This paper describes the approach taken in the CARESSES project, its initial steps, and its future plans.