Diego Dall'Alba

Autonomous Navigation for Robot-assisted Intraluminal and Endovascular Procedures: A Systematic Review

May 06, 2023
Ameya Pore, Zhen Li, Diego Dall'Alba, Albert Hernansanz, Elena De Momi, Arianna Menciassi, Alicia Casals, Jenny Dankelman, Paolo Fiorini, Emmanuel Vander Poorten

Increased demand for less invasive procedures has accelerated the adoption of Intraluminal Procedures (IP) and Endovascular Interventions (EI) performed through body lumens and vessels. As navigation through lumens and vessels is highly complex, there is growing interest in establishing autonomous navigation techniques for IP and EI to reach the target area. Current research efforts are directed toward increasing the Level of Autonomy (LoA) during the navigation phase. One key ingredient for autonomous navigation is Motion Planning (MP) techniques. This paper provides an overview of MP techniques, categorizing them by LoA. Our analysis investigates advances for the different clinical scenarios. Through a systematic literature analysis using the PRISMA method, the study summarizes relevant works and investigates the clinical aim, LoA, adopted MP techniques, and validation types. We identify the limitations of the corresponding MP methods and provide directions to improve the robustness of the algorithms in dynamic intraluminal environments. MP for IP and EI can be classified into four subgroups: node-, sampling-, optimization-, and learning-based techniques, with a notable rise in learning-based approaches in recent years. One of the review's contributions is the identification of the limiting factors in IP and EI robotic systems that hinder higher levels of autonomous navigation. In the future, navigation is bound to become more autonomous, placing the clinician in a supervisory position to improve control precision and reduce workload.
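As a hedged illustration of the sampling-based family identified in this review (not any of the surveyed implementations), the sketch below grows a rapidly-exploring random tree (RRT) toward a target inside a toy 2D free-space model standing in for a lumen; the bounds, step size, goal bias and goal tolerance are illustrative assumptions.

```python
import random
import math

def rrt(start, goal, is_free, step=0.5, goal_tol=1.0, max_iters=5000, bounds=(0.0, 20.0)):
    """Minimal RRT: grows a tree from `start` until a node lies within `goal_tol` of `goal`.
    `is_free(p)` must return True when point p = (x, y) is collision-free (e.g. inside the lumen)."""
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample a random point (goal-biased 10% of the time to speed up convergence).
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds), random.uniform(*bounds))
        # Extend the nearest tree node one step toward the sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Reconstruct the path by walking parents back to the root.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

# Toy "lumen": a free corridor around the line y = x.
path = rrt((1.0, 1.0), (18.0, 18.0), is_free=lambda p: abs(p[0] - p[1]) < 3.0)
print(len(path) if path else "no path found")
```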

* 31 pages, 7 figures, 3 tables; Accepted in IEEE Transactions on Robotics 

Constrained Reinforcement Learning and Formal Verification for Safe Colonoscopy Navigation

Mar 16, 2023
Davide Corsi, Luca Marzari, Ameya Pore, Alessandro Farinelli, Alicia Casals, Paolo Fiorini, Diego Dall'Alba

The field of robotic Flexible Endoscopes (FEs) has progressed significantly, offering a promising solution to reduce patient discomfort. However, the limited autonomy of most robotic FEs results in non-intuitive and challenging manoeuvres, constraining their application in clinical settings. While previous studies have employed lumen tracking for autonomous navigation, they fail to adapt to the presence of obstructions and sharp turns when the endoscope faces the colon wall. In this work, we propose a Deep Reinforcement Learning (DRL)-based navigation strategy that eliminates the need for lumen tracking. However, DRL methods pose safety risks as they do not account for the potential hazards associated with the actions taken. To ensure safety, we exploit a Constrained Reinforcement Learning (CRL) method to restrict the policy within a predefined safety regime. Moreover, we present a model selection strategy that utilises Formal Verification (FV) to choose a policy that is entirely safe before deployment. We validate our approach in a virtual colonoscopy environment and report that, out of 300 trained policies, we identified three that are entirely safe. Our work demonstrates that CRL, combined with model selection through FV, can improve the robustness and safety of robotic behaviour in surgical applications.
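A minimal sketch of the model-selection idea, assuming a bounded state space and a user-supplied safety predicate: candidate policies are kept only if a verification routine accepts every state in the region. The grid-based `verify_policy` below is a stand-in approximation; the paper relies on a formal verification tool over the trained networks.

```python
import numpy as np

def verify_policy(policy, state_low, state_high, is_safe, resolution=25):
    """Stand-in verifier: exhaustively evaluates `policy` on a dense grid of the
    bounded state space and reports whether every resulting action is safe.
    (The paper uses formal verification of the trained network; this grid check
    is only an illustrative approximation.)"""
    axes = [np.linspace(lo, hi, resolution) for lo, hi in zip(state_low, state_high)]
    grid = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, len(state_low))
    return all(is_safe(s, policy(s)) for s in grid)

def select_safe_policies(policies, state_low, state_high, is_safe):
    """Keep only candidate policies certified safe on the whole (discretised) state space."""
    return [p for p in policies if verify_policy(p, state_low, state_high, is_safe)]

# Toy example: 2D state, scalar steering action that must stay within +/-0.5 near the wall.
def is_safe(state, action):
    near_wall = state[0] > 0.8          # hypothetical "close to colon wall" condition
    return (abs(action) <= 0.5) if near_wall else True

candidates = [lambda s, k=k: np.tanh(k * s[1]) * 0.4 for k in (0.5, 1.0, 5.0)]
safe = select_safe_policies(candidates, [-1, -1], [1, 1], is_safe)
print(f"{len(safe)} of {len(candidates)} candidate policies passed the check")
```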

* Corsi, Marzari and Pore contributed equally 

Weakly Supervised Temporal Convolutional Networks for Fine-grained Surgical Activity Recognition

Feb 21, 2023
Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Tong Yu, Pietro Mascagni, Didier Mutter, Jacques Marescaux, Paolo Fiorini, Nicolas Padoy

Automatic recognition of fine-grained surgical activities, called steps, is a challenging but crucial task for intelligent intra-operative computer assistance. The development of current vision-based activity recognition methods relies heavily on a high volume of manually annotated data. This data is difficult and time-consuming to generate and requires domain-specific knowledge. In this work, we propose to use coarser and easier-to-annotate activity labels, namely phases, as weak supervision to learn step recognition with fewer step annotated videos. We introduce a step-phase dependency loss to exploit the weak supervision signal. We then employ a Single-Stage Temporal Convolutional Network (SS-TCN) with a ResNet-50 backbone, trained in an end-to-end fashion from weakly annotated videos, for temporal activity segmentation and recognition. We extensively evaluate and show the effectiveness of the proposed method on a large video dataset consisting of 40 laparoscopic gastric bypass procedures and the public benchmark CATARACTS containing 50 cataract surgeries.
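A minimal numpy sketch of one way such a step-phase dependency term could look, assuming a known mapping from each step to its parent phase: the softmax probability mass assigned to steps inconsistent with the annotated phase is penalised. The function name, mapping and toy shapes are illustrative, not the paper's exact loss or TCN implementation.

```python
import numpy as np

def step_phase_dependency_loss(step_logits, phase_labels, step_to_phase, eps=1e-8):
    """Weak-supervision term: for each frame, penalise the softmax probability mass
    assigned to steps that cannot occur in the frame's annotated phase.
    step_logits: (T, n_steps) per-frame step scores
    phase_labels: (T,) annotated phase index per frame
    step_to_phase: (n_steps,) phase index each step belongs to (hypothetical mapping)."""
    z = step_logits - step_logits.max(axis=1, keepdims=True)   # stable softmax
    e = np.exp(z)
    probs = e / e.sum(axis=1, keepdims=True)
    # Mask of steps inconsistent with the annotated phase, per frame.
    inconsistent = step_to_phase[None, :] != phase_labels[:, None]
    leaked = (probs * inconsistent).sum(axis=1)                 # mass on impossible steps
    return -np.log(1.0 - leaked + eps).mean()

# Toy example: 4 frames, 5 steps grouped into 2 phases.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
phases = np.array([0, 0, 1, 1])
step_to_phase = np.array([0, 0, 0, 1, 1])
print(step_phase_dependency_loss(logits, phases, step_to_phase))
```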

CIPCaD-Bench: Continuous Industrial Process datasets for benchmarking Causal Discovery methods

Aug 02, 2022
Giovanni Menegozzo, Diego Dall'Alba, Paolo Fiorini

Causal relationships are commonly examined in manufacturing processes to support fault investigations, perform interventions, and make strategic decisions. Industry 4.0 has made available an increasing amount of data that enables data-driven Causal Discovery (CD). Considering the growing number of recently proposed CD methods, it is necessary to introduce strict benchmarking procedures on publicly available datasets, since they represent the foundation for a fair comparison and validation of different methods. This work introduces two novel public datasets for CD in continuous manufacturing processes. The first dataset employs the well-known Tennessee Eastman simulator for fault detection and process control. The second dataset is extracted from an ultra-processed food manufacturing plant, and it includes a description of the plant as well as multiple ground truths. These datasets are used to propose a benchmarking procedure based on different metrics and evaluated on a wide selection of CD algorithms. This work allows testing CD methods in realistic conditions, enabling the selection of the most suitable method for specific target applications. The datasets are available at the following link: https://github.com/giovanniMen
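A hedged sketch of benchmark-style scoring for CD outputs, comparing an estimated adjacency matrix against a ground-truth graph with precision, recall, F1 and the Structural Hamming Distance; these are standard metrics in the CD literature and not necessarily the exact set used in the benchmark.

```python
import numpy as np

def score_causal_graph(estimated, truth):
    """Compare a binary estimated adjacency matrix against the ground truth.
    estimated[i, j] = 1 means an inferred directed edge i -> j."""
    estimated = np.asarray(estimated, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(estimated & truth)
    fp = np.sum(estimated & ~truth)
    fn = np.sum(~estimated & truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    shd = int(np.sum(estimated != truth))        # structural Hamming distance
    return {"precision": precision, "recall": recall, "f1": f1, "shd": shd}

# Toy 3-variable example: ground truth X0 -> X1 -> X2.
truth = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
estimated = np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]])   # one missed edge, one reversed edge
print(score_causal_graph(estimated, truth))
```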

* Supplementary Materials at: https://github.com/giovanniMen/CPCaD-Bench 

Colonoscopy Navigation using End-to-End Deep Visuomotor Control: A User Study

Jun 30, 2022
Ameya Pore, Martina Finocchiaro, Diego Dall'Alba, Albert Hernansanz, Gastone Ciuti, Alberto Arezzo, Arianna Menciassi, Alicia Casals, Paolo Fiorini

Flexible endoscopes for colonoscopy present several limitations due to their inherent complexity, resulting in patient discomfort and lack of intuitiveness for clinicians. Robotic devices together with autonomous control represent a viable solution to reduce the workload of endoscopists and the training time while improving the overall procedure outcome. Prior works on autonomous endoscope control use heuristic policies that limit their generalisation to the unstructured and highly deformable colon environment and require frequent human intervention. This work proposes an image-based control of the endoscope using Deep Reinforcement Learning, called Deep Visuomotor Control (DVC), to exhibit adaptive behaviour in convoluted sections of the colon tract. DVC learns a mapping between the endoscopic images and the control signal of the endoscope. A first user study of 20 expert gastrointestinal endoscopists was carried out to compare their navigation performance with DVC policies using a realistic virtual simulator. The results indicate that DVC shows equivalent performance on several assessment parameters while being safer. Moreover, a second user study with 20 novice participants was performed to demonstrate easier human supervision compared to a state-of-the-art heuristic control policy. Seamless supervision of colonoscopy procedures would enable interventionists to focus on the medical decision rather than on the control problem of the endoscope.
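A minimal sketch of the closed visuomotor loop structure described here, with a placeholder policy: each cycle, the current endoscopic frame is mapped to a steering command. The `dummy_policy`, frame source and action convention are assumptions; the paper's policy is a deep network trained with reinforcement learning in a virtual simulator.

```python
import numpy as np

def dummy_policy(image):
    """Placeholder for the learned visuomotor policy: the brightest pixel of the frame
    is turned into a steering command toward it (a crude stand-in for whatever features
    the learned policy would extract end-to-end)."""
    h, w = image.shape
    ys, xs = np.nonzero(image >= image.max() - 1e-6)
    target = np.array([xs.mean(), ys.mean()])
    centre = np.array([w / 2.0, h / 2.0])
    steer = (target - centre) / np.array([w, h])      # normalised steering command
    return np.clip(steer, -1.0, 1.0)

def control_loop(get_image, send_command, policy, n_steps=100):
    """Closed loop: grab a frame, query the policy, send the steering command."""
    for _ in range(n_steps):
        image = get_image()
        action = policy(image)
        send_command(action)

# Toy run with a synthetic frame whose bright spot sits up and to the right of centre.
frame = np.zeros((64, 64)); frame[10, 50] = 1.0
control_loop(lambda: frame, lambda a: None, dummy_policy, n_steps=1)
print(dummy_policy(frame))
```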

* Accepted in IROS2022 

Deliberation in autonomous robotic surgery: a framework for handling anatomical uncertainty

Mar 10, 2022
Eleonora Tagliabue, Daniele Meli, Diego Dall'Alba, Paolo Fiorini

Autonomous robotic surgery requires deliberation, i.e., the ability to plan and execute a task while adapting to uncertain and dynamic environments. Uncertainty in the surgical domain is mainly related to the partial pre-operative knowledge of patient-specific anatomical properties. In this paper, we introduce a logic-based framework for surgical tasks with deliberative functions of monitoring and learning. The DEliberative Framework for Robot-Assisted Surgery (DEFRAS) estimates a pre-operative patient-specific plan and executes it while continuously estimating the applied force from a pre-operative biomechanical model. The monitoring module compares this model with the actual situation reconstructed from sensors. In case of a significant mismatch, the learning module is invoked to update the model, thus improving the estimate of the exerted force. DEFRAS is validated in both simulated and real environments with the da Vinci Research Kit executing soft-tissue retraction. Compared with state-of-the-art related works, the success rate of the task is improved while minimizing the interaction with the tissue to prevent unintentional damage.
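A hedged sketch of the monitoring/learning interplay: the force predicted by a (stubbed) biomechanical model is compared with the measured force during execution, and a significant mismatch triggers a model update. The threshold, single-parameter model and update rule are illustrative, not the DEFRAS implementation.

```python
def monitor_and_learn(planned_actions, measure_force, predict_force,
                      update_model, mismatch_threshold=0.5):
    """Execute a plan while monitoring model/reality mismatch.
    predict_force(action, params) -> expected force from the biomechanical model
    measure_force(action)         -> force actually sensed during execution
    update_model(params, action, measured) -> revised model parameters."""
    params = {"stiffness": 1.0}          # illustrative patient-specific parameter
    log = []
    for action in planned_actions:
        expected = predict_force(action, params)
        measured = measure_force(action)
        if abs(measured - expected) > mismatch_threshold:
            # Significant mismatch: invoke the learning module to revise the model.
            params = update_model(params, action, measured)
        log.append((action, expected, measured, dict(params)))
    return log

# Toy example: the tissue is stiffer than the pre-operative model assumed.
predict = lambda a, p: p["stiffness"] * a
measure = lambda a: 1.6 * a
update = lambda p, a, f: {"stiffness": f / a}
for entry in monitor_and_learn([1.0, 2.0, 3.0], measure, predict, update):
    print(entry)
```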

* 2022 International Conference on Robotics and Automation 

Learning from Demonstrations for Autonomous Soft-tissue Retraction

Oct 01, 2021
Ameya Pore, Eleonora Tagliabue, Marco Piccinelli, Diego Dall'Alba, Alicia Casals, Paolo Fiorini

The current research focus in Robot-Assisted Minimally Invasive Surgery (RAMIS) is directed towards increasing the level of robot autonomy, to place surgeons in a supervisory position. Although Learning from Demonstrations (LfD) approaches are among the preferred ways for an autonomous surgical system to learn expert gestures, they require a high number of demonstrations and show poor generalization to the variable conditions of the surgical environment. In this work, we propose an LfD methodology based on Generative Adversarial Imitation Learning (GAIL) built on a Deep Reinforcement Learning (DRL) setting. GAIL combines generative adversarial networks, which learn the distribution of expert trajectories, with a DRL setting that ensures generalisation of the trajectories while providing human-like behaviour. We consider the automation of tissue retraction, a common RAMIS task that involves soft-tissue manipulation to expose a region of interest. In our proposed methodology, a small set of expert trajectories can be acquired through the da Vinci Research Kit (dVRK) and used to train the proposed LfD method inside a simulated environment. Results indicate that our methodology can accomplish the tissue retraction task with human-like behaviour while being more sample-efficient than the baseline DRL method. Finally, we show that the learnt policies can be successfully transferred to the real robotic platform and deployed for soft-tissue retraction on a synthetic phantom.
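A minimal sketch of the GAIL mechanism, assuming toy state-action arrays: a discriminator is trained to separate expert from agent pairs, and its output defines a surrogate reward for the RL learner. The tiny logistic-regression discriminator stands in for the networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_discriminator(expert_sa, agent_sa, lr=0.1, epochs=200):
    """Tiny logistic-regression discriminator D(s, a): trained to output ~1 on expert
    state-action pairs and ~0 on agent pairs (a stand-in for the GAIL discriminator network)."""
    X = np.vstack([expert_sa, agent_sa])
    y = np.concatenate([np.ones(len(expert_sa)), np.zeros(len(agent_sa))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                      # gradient of the binary cross-entropy
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def gail_reward(sa, w, b, eps=1e-8):
    """Surrogate reward r(s, a) = -log(1 - D(s, a)): high where the agent looks expert-like."""
    return -np.log(1.0 - sigmoid(sa @ w + b) + eps)

expert = rng.normal(loc=1.0, size=(100, 4))    # toy expert state-action pairs
agent = rng.normal(loc=-1.0, size=(100, 4))    # toy agent rollouts
w, b = train_discriminator(expert, agent)
print(gail_reward(expert[:3], w, b), gail_reward(agent[:3], w, b))
```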

* Accepted in IEEE International Symposium of Medical Robotics (ISMR 2021) 

Safe Reinforcement Learning using Formal Verification for Tissue Retraction in Autonomous Robotic-Assisted Surgery

Sep 06, 2021
Ameya Pore, Davide Corsi, Enrico Marchesini, Diego Dall'Alba, Alicia Casals, Alessandro Farinelli, Paolo Fiorini

Deep Reinforcement Learning (DRL) is a viable solution for automating repetitive surgical subtasks due to its ability to learn complex behaviours in a dynamic environment. This task automation could reduce the surgeon's cognitive workload, increase precision in critical aspects of the surgery, and lead to fewer patient-related complications. However, current DRL methods do not guarantee any safety criteria, as they maximise cumulative rewards without considering the risks associated with the actions performed. Due to this limitation, the application of DRL in the safety-critical paradigm of robot-assisted Minimally Invasive Surgery (MIS) has been constrained. In this work, we introduce a Safe-DRL framework that incorporates safety constraints for the automation of surgical subtasks via DRL training. We validate our approach in a virtual scene that replicates a tissue retraction task commonly occurring in multiple phases of an MIS. Furthermore, to evaluate the safe behaviour of the robotic arms, we formulate a formal verification tool for DRL methods that provides the probability of unsafe configurations. Our results indicate that a formal analysis guarantees safety with high confidence, such that the robotic instruments operate within the safe workspace and avoid hazardous interaction with other anatomical structures.
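A hedged sketch of estimating the probability of unsafe configurations for a trained policy: states are sampled from a bounded input region and the resulting instrument positions are checked against a safe-workspace box. This Monte-Carlo check only approximates what a formal verification tool bounds exactly; the toy policy and limits are assumptions.

```python
import numpy as np

def unsafe_probability(policy, region_low, region_high, in_safe_workspace,
                       n_samples=10000, seed=0):
    """Monte-Carlo estimate of the probability that `policy` drives the instrument
    outside the safe workspace for states sampled from the given input region.
    (An approximation of the paper's formal check, which bounds this exactly.)"""
    rng = np.random.default_rng(seed)
    states = rng.uniform(region_low, region_high, size=(n_samples, len(region_low)))
    violations = sum(not in_safe_workspace(policy(s)) for s in states)
    return violations / n_samples

# Toy setup: the "policy" proposes a 3D tool-tip displacement; safety means staying in a box.
def toy_policy(state):
    return 1.1 * state                       # hypothetical learned mapping

def in_safe_workspace(tip, limit=0.9):
    return bool(np.all(np.abs(tip) <= limit))

p_unsafe = unsafe_probability(toy_policy, [-1.0, -1.0, -1.0], [1.0, 1.0, 1.0], in_safe_workspace)
print(f"estimated probability of unsafe configuration: {p_unsafe:.3f}")
```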

* 7 pages, 6 figures 

Towards Hierarchical Task Decomposition using Deep Reinforcement Learning for Pick and Place Subtasks

Mar 01, 2021
Luca Marzari, Ameya Pore, Diego Dall'Alba, Gerardo Aragon-Camarasa, Alessandro Farinelli, Paolo Fiorini

Deep Reinforcement Learning (DRL) is emerging as a promising approach to generate adaptive behaviors for robotic platforms. However, a major drawback of using DRL is the data-hungry training regime that requires millions of trial-and-error attempts, which is impractical when running experiments on robotic systems. To address this issue, we propose a multi-subtask reinforcement learning method where complex tasks are decomposed manually into low-level subtasks by leveraging human domain knowledge. These subtasks can be parametrized as expert networks and learned via existing DRL methods. Trained subtasks can then be composed by a high-level choreographer. As a testbed, we use a pick-and-place robotic simulator to demonstrate our methodology, and show that our method outperforms an imitation-learning-based method and reaches a high success rate compared to an end-to-end learning approach. Moreover, we transfer the learned behavior to a different robotic environment, which allows us to exploit sim-to-real transfer and demonstrate the trajectories on a real robotic system. Our training regime is carried out using a central processing unit (CPU)-based system, which demonstrates the data-efficient properties of our approach.
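A minimal sketch of the decomposition described above: per-subtask experts (trivial functions here, trained DRL policies in the paper) are composed by a high-level choreographer that selects which expert acts based on simple state tests. The subtask names, observation keys and switching rules are illustrative.

```python
import numpy as np

# Hypothetical low-level experts for a pick-and-place decomposition; in the paper each
# would be a separately trained DRL policy network.
def approach_expert(obs):
    return obs["object_pos"] - obs["gripper_pos"]          # move toward the object

def grasp_expert(obs):
    return np.array([0.0, 0.0, -0.05])                     # descend and close (illustrative)

def place_expert(obs):
    return obs["target_pos"] - obs["gripper_pos"]          # carry toward the target

def choreographer(obs):
    """High-level choreographer: selects which expert acts based on simple state tests."""
    if not obs["holding"] and np.linalg.norm(obs["object_pos"] - obs["gripper_pos"]) > 0.02:
        return "approach", approach_expert(obs)
    if not obs["holding"]:
        return "grasp", grasp_expert(obs)
    return "place", place_expert(obs)

obs = {"gripper_pos": np.array([0.0, 0.0, 0.3]),
       "object_pos": np.array([0.2, 0.1, 0.0]),
       "target_pos": np.array([0.5, 0.5, 0.0]),
       "holding": False}
print(choreographer(obs))
```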

* This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible 