Manish Sahu


A force-sensing surgical drill for real-time force feedback in robotic mastoidectomy

Apr 05, 2023
Yuxin Chen, Anna Goodridge, Manish Sahu, Aditi Kishore, Seena Vafaee, Harsha Mohan, Katherina Sapozhnikov, Francis Creighton, Russell Taylor, Deepa Galaiya


Purpose: Robotic assistance in otologic surgery can reduce the task load of operating surgeons during the removal of bone around critical structures in the lateral skull base. However, safe deployment into these anatomical passageways necessitates advanced sensing capabilities that actively limit the interaction forces between the surgical tools and critical anatomy. Methods: We introduce a surgical drill equipped with a force sensor capable of measuring accurate tool-tissue interaction forces, enabling force control and feedback to surgeons. This work describes the design, calibration, and validation of the force-sensing surgical drill mounted on a cooperatively controlled surgical robot. Results: The force measurements at the tip of the surgical drill are validated in raw-egg drilling experiments, where a force sensor mounted below the egg serves as ground truth. The average root mean square errors (RMSE) for the point and path drilling experiments are 41.7 (± 12.2) mN and 48.3 (± 13.7) mN, respectively. Conclusions: The force-sensing prototype measures forces with sub-millinewton resolution, and the results demonstrate that the calibrated force-sensing drill produces accurate force measurements with minimal error compared to the ground-truth drill forces. The development of such sensing capabilities is crucial for the safe use of robotic systems in a clinical context.
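
To make the validation metric concrete, below is a minimal sketch of how an RMSE between the drill-tip force estimate and the reference sensor beneath the egg could be computed; the synthetic force traces and noise level are illustrative assumptions, not data from the paper.

```python
import numpy as np

def force_rmse(drill_forces: np.ndarray, reference_forces: np.ndarray) -> float:
    """RMSE between two time-aligned force traces (same unit, e.g. newtons)."""
    residual = drill_forces - reference_forces
    return float(np.sqrt(np.mean(residual ** 2)))

# Illustrative synthetic traces (in newtons); real data would come from the
# drill-mounted sensor and the force sensor mounted below the egg.
rng = np.random.default_rng(0)
reference = 0.5 * np.abs(np.sin(np.linspace(0, 4 * np.pi, 500)))
drill_estimate = reference + rng.normal(scale=0.04, size=reference.shape)
print(f"RMSE: {force_rmse(drill_estimate, reference) * 1e3:.1f} mN")
```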

* Accepted at IPCAI2023 

TAToo: Vision-based Joint Tracking of Anatomy and Tool for Skull-base Surgery

Dec 29, 2022
Zhaoshuo Li, Hongchao Shu, Ruixing Liang, Anna Goodridge, Manish Sahu, Francis X. Creighton, Russell H. Taylor, Mathias Unberath


Purpose: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. Methods: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and the surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints at the object level. Results: We validate TAToo both on simulation data, where ground-truth motion is available, and on anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for the skull and drill, respectively, with rotation errors below 1°. We further illustrate how TAToo may be used in a surgical navigation setting. Conclusion: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo predicts motion directly from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.
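
As a loose illustration of rigid inter-frame motion estimation (not TAToo's actual end-to-end differentiable, probabilistic optimization), the sketch below fits a rotation and translation between corresponding 3D points on an object in two frames using the standard SVD (Kabsch) solution; the point correspondences are assumed to be given.

```python
import numpy as np

def rigid_motion(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src to dst,
    where src and dst are (N, 3) arrays of corresponding 3D points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```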

* 12 pages 

Twin-S: A Digital Twin for Skull-base Surgery

Nov 21, 2022
Hongchao Shu, Ruixing Liang, Zhaoshuo Li, Anna Goodridge, Xiangyu Zhang, Hao Ding, Nimesh Nagururu, Manish Sahu, Francis X. Creighton, Russell H. Taylor, Adnan Munawar, Mathias Unberath


Purpose: Digital twins are virtual interactive models of the real world that exhibit identical behavior and properties. In surgical applications, computational analysis from digital twins can be used, for example, to enhance situational awareness. Methods: We present a digital twin framework for skull-base surgeries, named Twin-S, which can be integrated seamlessly with various image-guided interventions. Twin-S combines high-precision optical tracking and real-time simulation, and relies on rigorous calibration routines to ensure that the digital twin representation precisely mimics all real-world processes. Twin-S models and tracks the critical components of skull-base surgery, including the surgical tool, patient anatomy, and surgical camera. Notably, Twin-S updates the virtual anatomical model to reflect real-world drilling at frame rate. Results: We extensively evaluate the accuracy of Twin-S, which achieves an average error of 1.39 mm during the drilling process. We further illustrate how segmentation masks derived from the continuously updated digital twin can augment the surgical microscope view in a mixed reality setting, where bone requiring ablation is highlighted to provide surgeons with additional situational awareness. Conclusion: We present Twin-S, a digital twin environment for skull-base surgery. Twin-S tracks and updates the virtual model in real time given measurements from modern tracking technologies. Future research on complementing optical tracking with higher-precision vision-based approaches may further increase the accuracy of Twin-S.
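
A minimal sketch of the "update the virtual anatomy as drilling happens" idea: given the tracked drill-tip position in the anatomy frame, remove voxels of an occupancy grid that fall within the burr radius. The voxel representation and spherical burr model are assumptions made for illustration; Twin-S's actual simulation and calibration pipeline is considerably more involved.

```python
import numpy as np

def carve_drilled_voxels(occupancy: np.ndarray, origin: np.ndarray,
                         voxel_size: float, tip_position: np.ndarray,
                         burr_radius: float) -> int:
    """Set to 0 all occupied voxels whose centers lie within burr_radius of the
    tracked drill tip (all coordinates in the anatomy frame, e.g. in mm).
    Returns the number of voxels removed in this update."""
    idx = np.indices(occupancy.shape).reshape(3, -1).T
    centers = origin + (idx + 0.5) * voxel_size
    dist = np.linalg.norm(centers - tip_position, axis=1).reshape(occupancy.shape)
    removed = (dist < burr_radius) & (occupancy > 0)
    occupancy[removed] = 0
    return int(removed.sum())
```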


Simulation-to-Real domain adaptation with teacher-student learning for endoscopic instrument segmentation

Mar 02, 2021
Manish Sahu, Anirban Mukhopadhyay, Stefan Zachow


Purpose: Segmentation of surgical instruments in endoscopic videos is essential for automated surgical scene understanding and process modeling. However, relying on fully supervised deep learning for this task is challenging because manual annotation consumes valuable time of clinical experts. Methods: We introduce a teacher-student learning approach that learns jointly from annotated simulation data and unlabeled real data to tackle the erroneous learning problem of the current consistency-based unsupervised domain adaptation framework. Results: Empirical results on three datasets highlight the effectiveness of the proposed framework over current approaches for the endoscopic instrument segmentation task. Additionally, we provide an analysis of the major factors affecting performance on all datasets to highlight the strengths and failure modes of our approach. Conclusion: We show that our proposed approach can successfully exploit unlabeled real endoscopic video frames and improve generalization performance over pure simulation-based training and the previous state of the art. This takes us one step closer to effective segmentation of surgical tools in the annotation-scarce setting.
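
A condensed sketch of a teacher-student consistency step of the kind described above, in PyTorch: the student is trained with a supervised loss on labeled simulation frames plus a consistency loss towards the predictions of an EMA teacher on unlabeled real frames. The specific losses, the EMA momentum, and the absence of augmentation here are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.99):
    """Teacher weights follow an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def joint_step(student, teacher, optimizer, sim_imgs, sim_masks, real_imgs,
               consistency_weight: float = 1.0) -> float:
    """Supervised loss on simulated data + consistency to the teacher on real data."""
    sup_loss = F.cross_entropy(student(sim_imgs), sim_masks)
    with torch.no_grad():
        teacher_probs = torch.softmax(teacher(real_imgs), dim=1)  # soft targets
    student_log_probs = torch.log_softmax(student(real_imgs), dim=1)
    cons_loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    loss = sup_loss + consistency_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```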

* Accepted at IPCAI2021 

Endo-Sim2Real: Consistency learning-based domain adaptation for instrument segmentation

Jul 22, 2020
Manish Sahu, Ronja Strömsdörfer, Anirban Mukhopadhyay, Stefan Zachow


Surgical tool segmentation in endoscopic videos is an important component of computer-assisted intervention systems. The recent success of image-based solutions using fully supervised deep learning approaches can be attributed to the collection of large labeled datasets. However, annotating a large dataset of real videos can be prohibitively expensive and time-consuming. Computer simulations could alleviate the manual labeling problem; however, models trained on simulated data do not generalize to real data. This work proposes a consistency-based framework for joint learning from simulated and real (unlabeled) endoscopic data to bridge this generalization gap. Empirical results on two datasets (15 videos from Cholec80 and the EndoVis'15 dataset) highlight the effectiveness of the proposed Endo-Sim2Real method for instrument segmentation. We compare the segmentation of the proposed approach with state-of-the-art solutions and show that our method improves segmentation in terms of both quality and quantity.
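
As a small illustration of the consistency idea (complementing the teacher-student variant sketched earlier), the unlabeled real frames can contribute a loss that penalizes disagreement between predictions on two randomly augmented views of the same frame; the MSE choice and the stop-gradient on one branch are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model: torch.nn.Module, frame: torch.Tensor, augment) -> torch.Tensor:
    """Penalize disagreement between segmentation predictions on two augmented
    views of the same unlabeled frame; one branch is treated as the target."""
    probs_a = torch.softmax(model(augment(frame)), dim=1)
    with torch.no_grad():
        probs_b = torch.softmax(model(augment(frame)), dim=1)  # target branch
    return F.mse_loss(probs_a, probs_b)
```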

* Accepted at MICCAI2020 

Tool and Phase recognition using contextual CNN features

Oct 27, 2016
Manish Sahu, Anirban Mukhopadhyay, Angelika Szengel, Stefan Zachow


A transfer learning method is proposed for generating features suitable for surgical tool and phase recognition from ImageNet classification features [1]. In addition, methods are developed for generating contextual features and combining them with time-series analysis for final classification using a multi-class random forest. The proposed pipeline is tested on the training and testing datasets of the M2CAI16 challenges on tool and phase detection. Encouraging results are obtained in a leave-one-out cross-validation evaluation on the training dataset.
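
A rough sketch of the classification stage: frame-level CNN features (assumed here to be pre-extracted from an ImageNet-pretrained backbone) feed a multi-class random forest, and per-frame probabilities are then temporally smoothed as a stand-in for the time-series analysis step; the forest size and smoothing window are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_phase_classifier(features: np.ndarray, phases: np.ndarray) -> RandomForestClassifier:
    """Multi-class random forest on (N, D) frame-level CNN features."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features, phases)
    return clf

def smooth_and_predict(frame_probs: np.ndarray, window: int = 15) -> np.ndarray:
    """Moving-average smoothing of per-frame class probabilities over time,
    followed by an argmax to obtain the final per-frame label."""
    kernel = np.ones(window) / window
    smoothed = np.column_stack([np.convolve(frame_probs[:, c], kernel, mode="same")
                                for c in range(frame_probs.shape[1])])
    return smoothed.argmax(axis=1)
```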

* MICCAI M2CAI 2016 Surgical tool & phase detection challenge report 