Mythra V. Balakuntala


Enhancing Safety of Students with Mobile Air Filtration during School Reopening from COVID-19

Apr 29, 2021
Haoguang Yang, Mythra V. Balakuntala, Abigayle E. Moser, Jhon J. Quiñones, Ali Doosttalab, Antonio Esquivel-Puentes, Tanya Purwar, Luciano Castillo, Nina Mahmoudian, Richard M. Voyles

The paper discusses how robots enable occupant-safe, continuous protection for students when schools reopen. Conventionally, fixed air filters are not used as a key pandemic prevention measure for public indoor spaces because they cannot trap airborne pathogens quickly enough throughout an entire room. By combining the mobility of a robot with air filtration, however, the efficacy of cleaning the air around multiple people is greatly increased. A disinfection co-robot prototype is therefore developed to provide continuous and occupant-friendly protection to people gathering indoors, specifically to students in a classroom scenario. In a static classroom with students seated in a grid pattern, the mobile robot is able to serve up to 14 students per cycle while reducing the worst-case pathogen dosage by 20%, with higher robustness than a static filter. The extent of robot protection is optimized by tuning the passing distance and speed, so that the robot can serve more people under a given threshold on the worst-case dosage a person may receive.

* Manuscript accepted by 2021 IEEE International Conference on Robotics and Automation (ICRA) 
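
To make the passing-distance/speed trade-off concrete, below is a minimal, purely illustrative sketch of the kind of parameter sweep the abstract alludes to. The dosage model, grid layout, cycle budget, and all constants are assumptions made for illustration and do not come from the paper.

    # Hypothetical sketch: choose a passing distance and speed for a mobile
    # air-filtration robot so the worst-case dosage stays under a threshold
    # while the number of desks served per cycle is maximized. All constants
    # and the dosage model are illustrative assumptions.
    import itertools

    SEATS = [(r, c) for r in range(4) for c in range(4)]   # 4x4 grid of desks (assumed)
    SEAT_PITCH_M = 1.5                                      # desk spacing in meters (assumed)
    BASELINE_DOSE = 1.0                                     # normalized dose with no filtration
    CYCLE_BUDGET_S = 600.0                                  # time available per cleaning cycle (assumed)

    def residual_dose(passing_distance_m, speed_mps):
        """Toy model: filtration benefit decays with distance and shrinks at higher speed."""
        dwell = SEAT_PITCH_M / speed_mps                    # time spent near each desk
        removal = min(0.9, 0.5 * dwell / (1.0 + passing_distance_m ** 2))
        return BASELINE_DOSE * (1.0 - removal)

    def plan(dose_threshold=0.8):
        """Pick the (distance, speed) pair serving the most desks without exceeding the threshold."""
        best = None
        for d, v in itertools.product([0.5, 1.0, 1.5], [0.2, 0.4, 0.6, 0.8]):
            dose = residual_dose(d, v)
            if dose > dose_threshold:
                continue                                    # violates the worst-case dosage limit
            reachable = int(CYCLE_BUDGET_S * v / SEAT_PITCH_M)  # desks visited within the budget
            served = min(len(SEATS), reachable)
            if best is None or served > best["served"] or (served == best["served"] and dose < best["dose"]):
                best = {"served": served, "distance_m": d, "speed_mps": v, "dose": dose}
        return best

    print(plan())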

Learning Multimodal Contact-Rich Skills from Demonstrations Without Reward Engineering

Mar 01, 2021
Mythra V. Balakuntala, Upinder Kaur, Xin Ma, Juan Wachs, Richard M. Voyles

Everyday contact-rich tasks, such as peeling, cleaning, and writing, demand multimodal perception for effective and precise execution. However, these tasks pose a novel challenge to robots, which lack the ability to combine multimodal stimuli when performing contact-rich tasks. Learning-based methods have attempted to model multimodal contact-rich tasks, but they often require extensive training examples and task-specific reward functions, limiting their practicality and scope. Hence, we propose a generalizable, model-free learning-from-demonstration framework for robots to learn contact-rich skills without explicit reward engineering. We present a novel multimodal sensor data representation that improves learning performance for contact-rich skills. We performed training and experiments on a real Sawyer robot for three everyday contact-rich skills: cleaning, writing, and peeling. Notably, the framework achieves a success rate of 100% for the peeling and writing skills and 80% for the cleaning skill. Hence, this skill-learning framework can be extended to learn other physical manipulation skills.

* Submitted to IEEE ICRA 2021. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible 
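
As a rough illustration of what a fused multimodal input might look like, here is a minimal sketch that concatenates a stand-in visual feature with normalized force/torque readings. The encoder, scaling constants, and dimensions are assumptions for illustration; the paper's actual sensor representation is not reproduced here.

    # Hypothetical sketch of a multimodal state for a contact-rich skill:
    # concatenate a low-dimensional visual feature with normalized force/torque
    # readings. All constants and feature choices are illustrative assumptions.
    import numpy as np

    FORCE_SCALE = 20.0   # N, assumed typical contact-force magnitude
    TORQUE_SCALE = 2.0   # N*m, assumed typical torque magnitude

    def visual_feature(rgb_image: np.ndarray) -> np.ndarray:
        """Stand-in for a learned image encoder: per-channel means of the workspace crop."""
        return rgb_image.reshape(-1, 3).mean(axis=0) / 255.0

    def multimodal_state(rgb_image: np.ndarray, wrench: np.ndarray) -> np.ndarray:
        """Fuse visual and force/torque modalities into a single normalized state vector."""
        f = wrench[:3] / FORCE_SCALE      # Fx, Fy, Fz
        t = wrench[3:] / TORQUE_SCALE     # Tx, Ty, Tz
        return np.concatenate([visual_feature(rgb_image), f, t])

    # Example with synthetic sensor data
    state = multimodal_state(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8),
                             np.array([1.2, -0.4, 8.5, 0.02, 0.10, -0.03]))
    print(state.shape)   # (9,) -> 3 visual + 3 force + 3 torque features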

Extending Policy from One-Shot Learning through Coaching

May 13, 2019
Mythra V. Balakuntala, Vishnunandan L. N. Venkatesh, Jyothsna Padmakumar Bindu, Richard M. Voyles, Juan Wachs

Humans generally teach their fellow collaborators to perform tasks through a small number of demonstrations. The learnt task is then corrected or extended to meet specific task goals by means of coaching. Adopting a similar framework of demonstrations and coaching makes teaching robots highly intuitive. Unlike traditional Learning from Demonstration (LfD) approaches, which require multiple demonstrations, we present a one-shot learning-from-demonstration approach to learn tasks. The learnt task is corrected and generalized using two layers of evaluation and modification. First, the robot self-evaluates its performance and corrects it to be closer to the demonstrated task. Then, coaching is used to extend the learnt policy so that it adapts to varying task goals. Both self-evaluation and coaching are implemented using reinforcement learning (RL) methods. Coaching is achieved through human feedback on the desired goal and on action modification, allowing generalization to specified task goals. The proposed approach is evaluated on a scooping task with a single demonstration. The self-evaluation framework aims to reduce the resistance to scooping in the media. To reduce the search space for RL, we bootstrap the search using the least-resistance path obtained from resistive force theory. Coaching is then used to generalize the learnt task policy to transfer a desired quantity of material. Thus, the proposed method provides a framework for learning tasks from one demonstration and generalizing them using human feedback through coaching.
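
The two-layer evaluation/modification idea can be illustrated with a toy loop: a self-evaluation stage that lowers a resistance proxy, followed by a coaching stage driven by a human-specified target quantity. The scooping simulator, update rules, and constants below are hypothetical stand-ins, not the paper's method.

    # Hypothetical sketch of the two-layer refinement: self-evaluation reduces a
    # resistance proxy, then coaching adjusts the policy toward a human-specified
    # goal (scooped amount). The simulator and constants are illustrative only.
    import random

    def simulate_scoop(depth):
        """Toy stand-in for executing the scooping skill: returns (resistance, scooped_amount)."""
        resistance = (depth - 0.02) ** 2 * 100.0 + random.uniform(0.0, 0.05)
        scooped = max(0.0, 40.0 * depth)          # grams, proportional to scoop depth
        return resistance, scooped

    def self_evaluate(depth, iters=20, step=0.005):
        """Layer 1: hill-climb the scoop depth to minimize resistance (self-evaluation)."""
        best_r, _ = simulate_scoop(depth)
        for _ in range(iters):
            cand = depth + random.choice([-step, step])
            r, _ = simulate_scoop(cand)
            if r < best_r:
                depth, best_r = cand, r
        return depth

    def coach(depth, target_grams, iters=20, gain=0.001):
        """Layer 2: coaching feedback moves the policy toward the desired scooped quantity."""
        for _ in range(iters):
            _, scooped = simulate_scoop(depth)
            depth += gain * (target_grams - scooped)   # human-specified goal drives the update
        return depth

    depth = self_evaluate(depth=0.01)          # start from the demonstrated scoop depth
    depth = coach(depth, target_grams=1.5)     # then generalize to a new desired quantity
    print(round(depth, 4))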

Self-Evaluation in One-Shot Learning from Demonstration of Contact-Intensive Tasks

Apr 03, 2019
Mythra V. Balakuntala, L. N. Vishnunandan Venkatesh, Jyothsna Padmakumar Bindu, Richard M. Voyles

Humans naturally "program" a fellow collaborator to perform a task by demonstrating it a few times. It is intuitive, therefore, for a human to program a collaborative robot by demonstration, and many paradigms use a single demonstration of the task. This is a form of one-shot learning in which a single training example, plus some context of the task, is used to infer a model of the task for subsequent execution and later refinement. This paper presents a one-shot learning-from-demonstration framework for learning contact-intensive tasks using only visual perception of the demonstrated task. The robot learns a policy for performing the task in terms of a priori skills, and then uses self-evaluation, based on visual and tactile perception of the skill performance, to learn the force correspondences for those skills. The self-evaluation is driven by goal states detected in the demonstration with the help of task context, and the skill parameters are tuned using reinforcement learning. This approach enables the robot to learn force correspondences that cannot be inferred from a visual demonstration of the task. The effectiveness of the approach is evaluated on a vegetable peeling task.
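
To illustrate what tuning a force correspondence by self-evaluation could look like, here is a toy bandit-style sketch in which candidate force setpoints are scored by a (simulated) visual goal check. The peel model, candidate forces, and reward are assumptions for illustration only.

    # Hypothetical sketch: candidate force setpoints are tried, a toy visual check of
    # the goal state provides the reward, and a simple bandit-style update selects the
    # force that reliably achieves the goal. Constants are illustrative assumptions.
    import random

    FORCE_CANDIDATES = [2.0, 4.0, 6.0, 8.0, 10.0]   # N, assumed candidate peeling forces

    def visual_goal_reached(force):
        """Toy stand-in for the visual/tactile goal check: too little force fails to peel,
        too much force damages the vegetable."""
        return 4.0 <= force + random.uniform(-1.0, 1.0) <= 9.0

    def tune_force(trials=200, epsilon=0.2):
        value = {f: 0.0 for f in FORCE_CANDIDATES}
        count = {f: 0 for f in FORCE_CANDIDATES}
        for _ in range(trials):
            if random.random() < epsilon:
                f = random.choice(FORCE_CANDIDATES)                # explore
            else:
                f = max(FORCE_CANDIDATES, key=lambda x: value[x])  # exploit best estimate
            reward = 1.0 if visual_goal_reached(f) else 0.0
            count[f] += 1
            value[f] += (reward - value[f]) / count[f]             # incremental mean update
        return max(FORCE_CANDIDATES, key=lambda x: value[x])

    print(tune_force())   # expected to settle near the mid-range forces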

DESK: A Robotic Activity Dataset for Dexterous Surgical Skills Transfer to Medical Robots

Mar 03, 2019
Naveen Madapana, Md Masudur Rahman, Natalia Sanchez-Tamayo, Mythra V. Balakuntala, Glebys Gonzalez, Jyothsna Padmakumar Bindu, L. N. Vishnunandan Venkatesh, Xingguang Zhang, Juan Barragan Noguera, Thomas Low, Richard Voyles, Yexiang Xue, Juan Wachs

Datasets are an essential component for training effective machine learning models. In particular, surgical robotic datasets have been key to many advances in semi-autonomous surgeries, skill assessment, and training. Simulated surgical environments can enhance the data collection process by making it faster, simpler, and cheaper than on real systems. In addition, combining data from multiple robotic domains can provide rich and diverse training data for transfer learning algorithms. In this paper, we present the DESK (Dexterous Surgical Skill) dataset. It comprises a set of surgical robotic skills collected during a surgical training task on three robotic platforms: the Taurus II robot, the simulated Taurus II robot, and the YuMi robot. The dataset was used to test the idea of transferring knowledge across domains (e.g., from Taurus to YuMi) for a surgical gesture classification task with seven gestures. We explored three scenarios: 1) no transfer, 2) transfer from simulated Taurus to real Taurus, and 3) transfer from simulated Taurus to the YuMi robot. We conducted extensive experiments with three supervised learning models and provide baselines for each scenario. Results show that using simulation data during training enhances performance on the real robot when limited real data is available. In particular, we obtained an accuracy of 55% on the real Taurus data using a model trained only on simulator data, and an accuracy improvement of 34% when 3% of the real data is added to the training process.

* 8 pages, 5 figures, 4 tables, submitted to IROS 2019 conference 
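
As a sketch of the sim-to-real evaluation protocol described above, the snippet below trains a classifier on simulated-domain features, optionally mixes in a small fraction of real-domain data, and tests on the real domain. It uses synthetic arrays and a scikit-learn random forest as stand-ins; the actual DESK features, data loading, and the paper's three models are not shown.

    # Hypothetical sketch of the transfer scenarios: train on simulated-robot features,
    # optionally add a small fraction of real-robot data, evaluate on the real robot.
    # Synthetic arrays stand in for the DESK features; constants are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    N_GESTURES, DIM = 7, 32

    def synthetic_domain(n, shift):
        """Placeholder for per-domain gesture features (sim vs. real differ by a shift)."""
        y = rng.integers(0, N_GESTURES, n)
        X = rng.normal(size=(n, DIM)) + y[:, None] * 0.5 + shift
        return X, y

    X_sim, y_sim = synthetic_domain(2000, shift=0.0)    # simulated Taurus
    X_real, y_real = synthetic_domain(1000, shift=0.3)  # real Taurus (domain-shifted)

    def evaluate(real_fraction):
        n_real = int(real_fraction * len(X_real) // 2)           # draw training data from the first half
        X_train = np.vstack([X_sim, X_real[:n_real]])
        y_train = np.concatenate([y_sim, y_real[:n_real]])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
        return accuracy_score(y_real[500:], clf.predict(X_real[500:]))  # test on the held-out half

    print("sim only:      ", evaluate(0.00))
    print("sim + 3% real: ", evaluate(0.03))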