Peter Kazanzides

Calibration and evaluation of a motion measurement system for PET imaging studies

Nov 29, 2023
Junxiang Wang, Ti Wu, Iulian I. Iordachita, Peter Kazanzides

Positron Emission Tomography (PET) enables functional imaging of deep brain structures, but the bulk and weight of current systems preclude their use during many natural human activities, such as locomotion. The proposed long-term solution is to construct a robotic system that can support an imaging system surrounding the subject's head, and then move the system to accommodate natural motion. This requires a system to measure the motion of the head with respect to the imaging ring, for use by both the robotic system and the image reconstruction software. We report here the design, calibration, and experimental evaluation of a parallel string encoder mechanism for sensing this motion. Our results indicate that with kinematic calibration, the measurement system can achieve accuracy within 0.5 mm, especially for small motions.

* Journal of Medical Robotics Research, vol. 8, no. 01n02, p. 2340003, 2023  
* arXiv admin note: text overlap with arXiv:2311.17863 
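
To make the string-encoder idea concrete, below is a minimal sketch of recovering the 6-DOF head pose from measured string lengths by nonlinear least squares. The anchor geometry, the number of strings, and the solver choice are illustrative assumptions, not the authors' design or their calibration procedure.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

# Hypothetical geometry (mm): ring-side string exit points B and
# helmet-side attachment points P for six string encoders.
B = np.array([[200, 0, 50], [-180, 60, -40], [30, 210, 80],
              [-50, -190, 60], [160, 150, -70], [-140, -160, 90]], float)
P = np.array([[70, 20, 0], [-60, 30, 10], [10, 80, -5],
              [-20, -75, 15], [55, 50, 20], [-45, -60, -10]], float)

def string_lengths(pose):
    """Predicted string lengths for pose = (rotation vector, translation)."""
    rot, t = R.from_rotvec(pose[:3]), pose[3:]
    return np.linalg.norm(rot.apply(P) + t - B, axis=1)

def estimate_pose(measured, x0=np.zeros(6)):
    """Fit the 6-DOF head pose to the measured string lengths."""
    return least_squares(lambda x: string_lengths(x) - measured, x0).x

# Synthetic check: recover a small head motion from noiseless lengths.
true_pose = np.array([0.01, -0.02, 0.0, 2.0, -1.0, 0.5])  # rad, mm
print(estimate_pose(string_lengths(true_pose)))
```

Kinematic calibration, as reported in the paper, would additionally refine the anchor locations (and any encoder offsets) from measurements at known poses.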

Evaluation of a measurement system for PET imaging studies

Nov 29, 2023
Junxiang Wang, Ti Wu, Iulian I. Iordachita, Peter Kazanzides

Positron Emission Tomography (PET) enables functional imaging of deep brain structures, but the bulk and weight of current systems preclude their use during many natural human activities, such as locomotion. The proposed long-term solution is to construct a robotic system that can support an imaging system surrounding the subject's head, and then move the system to accommodate natural motion. This requires a system to measure the motion of the head with respect to the imaging ring, for use by both the robotic system and the image reconstruction software. We report here the design and experimental evaluation of a parallel string encoder mechanism for sensing this motion. Our preliminary results indicate that the measurement system may achieve accuracy within 0.5 mm, especially for small motions, with improved accuracy possible through kinematic calibration.

* 2022 International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 2022, pp. 1-6  

Method for robotic motion compensation during PET imaging of mobile subjects

Nov 29, 2023
Junxiang Wang, Iulian I. Iordachita, Peter Kazanzides

Studies of the human brain during natural activities, such as locomotion, would benefit from the ability to image deep brain structures during these activities. While Positron Emission Tomography (PET) can image these structures, the bulk and weight of current scanners are not compatible with the desire for a wearable device. This has motivated the design of a robotic system to support a PET imaging system around the subject's head and to move the system to accommodate natural motion. We report here the design and experimental evaluation of a prototype robotic system that senses motion of a subject's head, using parallel string encoders connected between the robot-supported imaging ring and a helmet worn by the subject. This measurement is used to robotically move the imaging ring (coarse motion correction) and to compensate for residual motion during image reconstruction (fine motion correction). Minimization of latency and of measurement error are the key design goals for coarse and fine motion correction, respectively. The system is evaluated using recorded human head motions during locomotion, with a mock imaging system consisting of lasers and cameras, and is shown to provide an overall system latency of about 80 ms, sufficient for coarse motion correction and collision avoidance, together with a measurement accuracy of about 0.5 mm for fine motion correction.

* 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 
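
The coarse/fine split described above can be summarized in a few lines. This is an illustrative sketch only; the three callbacks are hypothetical interfaces, not the system's actual API.

```python
import time
import numpy as np

def correction_loop(measure_head_pose, servo_ring, log_residual, period=0.01):
    """Coarse correction servos the ring toward the measured head pose;
    the residual tracking error is logged for use during reconstruction."""
    while True:
        head = measure_head_pose()             # 4x4 head pose in ring frame
        ring = servo_ring(head)                # coarse: command the ring
        residual = np.linalg.inv(ring) @ head  # fine: leftover pose error
        log_residual(time.time(), residual)    # timestamped for reconstruction
        time.sleep(period)
```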

Improving Surgical Situational Awareness with Signed Distance Field: A Pilot Study in Virtual Reality

Mar 03, 2023
Hisashi Ishida, Juan Antonio Barragan, Adnan Munawar, Zhaoshuo Li, Peter Kazanzides, Michael Kazhdan, Danielle Trakimas, Francis X. Creighton, Russell H. Taylor

The introduction of image-guided surgical navigation (IGSN) has greatly benefited technically demanding surgical procedures by providing real-time support and guidance to the surgeon during surgery. To develop effective IGSN, a careful selection of the information provided to the surgeon is needed. However, identifying optimal feedback modalities is challenging due to the broad array of available options. To address this problem, we have developed an open-source library that facilitates the development of multimodal navigation systems in a wide range of surgical procedures relying on medical imaging data. To provide guidance, our system calculates the minimum distance between the surgical instrument and the anatomy and then presents this information to the user through different mechanisms. The real-time performance of our approach is achieved by calculating Signed Distance Fields at initialization from segmented anatomical volumes. Using this framework, we developed a multimodal surgical navigation system to help surgeons navigate anatomical variability in a skull-base surgery simulation environment. Three different feedback modalities were explored: visual, auditory, and haptic. To evaluate the proposed system, a pilot user study was conducted in which four clinicians performed mastoidectomy procedures with and without guidance. Each condition was assessed using objective performance and subjective workload metrics. This pilot user study showed improvements in procedural safety without additional time or workload. These results demonstrate our pipeline's successful use case in the context of mastoidectomy.

* First two authors contributed equally. 6 pages 
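
A minimal sketch of the distance query described above: a signed distance field is precomputed once from the segmented anatomy, after which the tool-to-anatomy distance is a cheap lookup. The brute-force SDF construction and nearest-voxel lookup here are illustrative simplifications, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def build_sdf(mask, spacing=1.0):
    """Signed distance field (mm): positive outside the anatomy, negative inside."""
    outside = distance_transform_edt(~mask) * spacing
    inside = distance_transform_edt(mask) * spacing
    return outside - inside

def query_distance(sdf, point_vox):
    """Nearest-voxel lookup at a tool position in voxel coordinates
    (a real system would use trilinear interpolation for smoothness)."""
    idx = np.clip(np.round(point_vox).astype(int), 0, np.array(sdf.shape) - 1)
    return sdf[tuple(idx)]

mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:40, 20:40, 20:40] = True              # toy segmented anatomy
sdf = build_sdf(mask)                         # done once at initialization
print(query_distance(sdf, np.array([10.0, 30.0, 30.0])))  # distance in mm
```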

Fully Immersive Virtual Reality for Skull-base Surgery: Surgical Training and Beyond

Feb 27, 2023
Adnan Munawar, Zhaoshuo Li, Nimesh Nagururu, Danielle Trakimas, Peter Kazanzides, Russell H. Taylor, Francis X. Creighton

Purpose: A fully immersive virtual reality system (FIVRS), where surgeons can practice procedures on virtual anatomies, is a scalable and cost-effective alternative to cadaveric training. The fully digitized virtual surgeries can also be used to automatically assess the surgeon's skills using metrics that are otherwise hard to collect in reality. Thus, we present FIVRS, a virtual reality (VR) system designed for skull-base surgery, which combines high-fidelity surgical simulation software with a real hardware setup. Methods: FIVRS integrates software and hardware features to allow surgeons to use normal clinical workflows in VR. FIVRS uses advanced rendering designs and drilling algorithms for realistic surgery. We also design a head-mounted display with ergonomics similar to those of surgical microscopes. A rich set of digitized data is recorded from each VR surgery, including eye gaze, motion, force, and video, for post-analysis. A user-friendly interface is also designed to ease the learning curve of using FIVRS. Results: We present results from a user study involving surgeons to showcase the efficacy of FIVRS and its generated data. Conclusion: We present FIVRS, a fully immersive VR system for skull-base surgery. FIVRS features a realistic software simulation coupled with modern hardware for improved realism. The system is completely open-source and provides feature-rich data in an industry-standard format.

Feature-aggregated spatiotemporal spine surface estimation for wearable patch ultrasound volumetric imaging

Nov 11, 2022
Baichuan Jiang, Keshuai Xu, Abhay Moghekar, Peter Kazanzides, Emad Boctor

Clear identification of bone structures is crucial for ultrasound-guided lumbar interventions, but it can be challenging due to the complex shapes of the self-shadowing vertebra anatomy and the extensive background speckle noise from the surrounding soft tissue structures. Therefore, we propose to use a patch-like wearable ultrasound solution to capture the reflective bone surfaces from multiple imaging angles and create 3D bone representations for interventional guidance. In this work, we present our method for estimating the vertebra bone surfaces using a spatiotemporal U-Net architecture that learns from the B-Mode image and aggregated feature maps of hand-crafted filters. The method is evaluated on spine phantom image data collected by our proposed miniaturized wearable "patch" ultrasound device, and the results show a significant improvement over the baseline method, with promising accuracy. Equipped with this surface estimation framework, our wearable ultrasound system can potentially provide intuitive and accurate interventional guidance for clinicians in an augmented reality setting.
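
The input assembly implied by the abstract might look as follows: hand-crafted filter responses are stacked with each B-Mode frame as channels, and a short temporal window is fed to a spatiotemporal (3D) U-Net. The specific filters, window length, and model class are illustrative assumptions rather than the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def handcrafted_features(bmode):
    """bmode: (T, 1, H, W). Appends simple edge and attenuation cues as
    stand-ins for the paper's hand-crafted filter bank."""
    sobel_y = torch.tensor([[[[-1., -2., -1.],
                              [ 0.,  0.,  0.],
                              [ 1.,  2.,  1.]]]])
    grad = F.conv2d(bmode, sobel_y, padding=1)      # depth-wise intensity gradient
    atten = torch.cumsum(bmode, dim=2)              # energy accumulated along the beam
    return torch.cat([bmode, grad, atten], dim=1)   # (T, 3, H, W)

frames = torch.rand(4, 1, 128, 128)                 # 4-frame temporal window
x = handcrafted_features(frames)
x = x.permute(1, 0, 2, 3).unsqueeze(0)              # (1, C, T, H, W) for a 3D U-Net
# net = SpatioTemporalUNet(in_channels=3)           # hypothetical model
# surface_prob = net(x)
```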

Learning Deep Nets for Gravitational Dynamics with Unknown Disturbance through Physical Knowledge Distillation: Initial Feasibility Study

Oct 04, 2022
Hongbin Lin, Qian Gao, Xiangyu Chu, Qi Dou, Anton Deguet, Peter Kazanzides, K. W. Samuel Au

Learning high-performance deep neural networks for dynamic modeling of high Degree-Of-Freedom (DOF) robots remains challenging due to sampling complexity. Unknown system disturbances caused by unmodeled dynamics (such as internal compliance and cabling) further exacerbate the problem. In this paper, a novel framework characterized by both high data efficiency and disturbance-adapting capability is proposed to address the problem of modeling gravitational dynamics using deep nets in feedforward gravity compensation control for high-DOF master manipulators with unknown disturbance. In particular, Feedforward Deep Neural Networks (FDNNs) are learned from both prior knowledge of an existing analytical model and observation of the robot system by Knowledge Distillation (KD). Through extensive experiments on high-DOF master manipulators with significant disturbance, we show that our method surpasses a standard Learning-from-Scratch (LfS) approach in terms of data efficiency and disturbance adaptation. Our initial feasibility study has demonstrated the potential of outperforming the analytical teacher model as the amount of training data increases.

* IEEE Robotics and Automation Letters, vol. 6, no. 2, April 2021  
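
A minimal sketch of the distillation step described above, assuming torque labels from the analytical teacher on densely sampled joint configurations plus a smaller set of real observations; the architecture, loss weighting, and data shown are illustrative stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical student: joint positions (7-DOF) -> gravity torques.
student = nn.Sequential(nn.Linear(7, 128), nn.ReLU(),
                        nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 7))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def kd_step(q_teacher, tau_teacher, q_real, tau_real, alpha=0.5):
    """One step mixing teacher-labeled samples with real observations."""
    loss = (alpha * nn.functional.mse_loss(student(q_teacher), tau_teacher)
            + (1 - alpha) * nn.functional.mse_loss(student(q_real), tau_real))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Illustrative random stand-ins for the two data sources.
q_t, tau_t = torch.rand(256, 7), torch.rand(256, 7)  # teacher-labeled
q_r, tau_r = torch.rand(32, 7), torch.rand(32, 7)    # real observations
kd_step(q_t, tau_t, q_r, tau_r)
```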

CaRTS: Causality-driven Robot Tool Segmentation from Vision and Kinematics Data

Apr 06, 2022
Hao Ding, Jintan Zhang, Peter Kazanzides, Jie Ying Wu, Mathias Unberath

Vision-based segmentation of the robotic tool during robot-assisted surgery enables downstream applications, such as augmented reality feedback, while allowing for inaccuracies in robot kinematics. With the introduction of deep learning, many methods were presented to solve instrument segmentation directly and solely from images. While these approaches made remarkable progress on benchmark datasets, fundamental challenges pertaining to their robustness remain. We present CaRTS, a causality-driven robot tool segmentation algorithm designed around a complementary causal model of the robot tool segmentation task. Rather than directly inferring segmentation masks from observed images, CaRTS iteratively aligns tool models with image observations by updating the initially incorrect robot kinematic parameters through forward kinematics and differentiable rendering to optimize image feature similarity end-to-end. We benchmark CaRTS against competing techniques on both synthetic and real data from the dVRK, generated in precisely controlled scenarios to allow for counterfactual synthesis. On training-domain test data, CaRTS achieves a Dice score of 93.4 that is preserved well (Dice score of 91.8) when tested on counterfactually altered test data exhibiting low brightness, smoke, blood, and altered background patterns. This compares favorably to Dice scores of 95.0 and 62.8, respectively, for a purely image-based method trained and tested on the same data. Future work will involve accelerating CaRTS to achieve video framerate and estimating the impact of occlusion in practice. Despite these limitations, our results are promising: in addition to achieving high segmentation accuracy, CaRTS provides estimates of the true robot kinematics, which may benefit applications such as force estimation.
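
The abstract's optimization loop can be sketched as follows: starting from the inaccurate kinematics, joint angles are refined by gradient descent on a feature-level discrepancy between a differentiably rendered tool and the observed image. The renderer and feature extractor here are hypothetical stand-ins; CaRTS's actual components are not reproduced.

```python
import torch

def refine_kinematics(q_init, observed_feats, render, feats, steps=100, lr=1e-2):
    """q_init: initial (inaccurate) joint angles; render(q) -> tool image
    (differentiable); feats(img) -> feature map. Returns corrected angles,
    from which the final segmentation mask can be re-rendered."""
    q = q_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([q], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(feats(render(q)), observed_feats)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q.detach()
```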

Virtual Reality for Synergistic Surgical Training and Data Generation

Nov 15, 2021
Adnan Munawar, Zhaoshuo Li, Punit Kunjam, Nimesh Nagururu, Andy S. Ding, Peter Kazanzides, Thomas Looi, Francis X. Creighton, Russell H. Taylor, Mathias Unberath

Surgical simulators not only allow planning and training of complex procedures, but also offer the ability to generate structured data for algorithm development, which may be applied in image-guided computer-assisted interventions. While there have been efforts to develop either training platforms for surgeons or data generation engines, these two features, to our knowledge, have not been offered together. We present our development of a cost-effective and synergistic framework, named Asynchronous Multibody Framework Plus (AMBF+), which generates data for downstream algorithm development simultaneously with users practicing their surgical skills. AMBF+ offers stereoscopic display on a virtual reality (VR) device and haptic feedback for immersive surgical simulation. It can also generate diverse data such as object poses and segmentation maps. AMBF+ is designed with a flexible plugin setup which allows for unobtrusive extension to simulate different surgical procedures. We show one use case of AMBF+ as a virtual drilling simulator for lateral skull-base surgery, where users can actively modify the patient anatomy using a virtual surgical drill. We further demonstrate how the generated data can be used for validating and training downstream computer vision algorithms.

* MICCAI 2021 AE-CAI "Outstanding Paper Award". Code: https://github.com/LCSR-SICKKIDS/volumetric_drilling 

Accelerating Surgical Robotics Research: Reviewing 10 Years of Research with the dVRK

May 13, 2021
Claudia D'Ettorre, Andrea Mariani, Agostino Stilli, Ferdinando Rodriguez y Baena, Pietro Valdastri, Anton Deguet, Peter Kazanzides, Russell H. Taylor, Gregory S. Fischer, Simon P. DiMaio, Arianna Menciassi, Danail Stoyanov

Robotic-assisted surgery is now well-established in clinical practice and has become the gold-standard treatment option for several clinical indications. The field of robotic-assisted surgery is expected to grow substantially in the next decade, with a range of new robotic devices emerging to address unmet clinical needs across different specialities. A vibrant surgical robotics research community is pivotal for conceptualizing such new systems as well as for developing and training the engineers and scientists who translate them into practice. The da Vinci Research Kit (dVRK), an academic and industry collaborative effort to re-purpose decommissioned da Vinci surgical systems (Intuitive Surgical Inc., CA, USA) as a research platform for surgical robotics research, has been a key initiative for lowering the barrier to entry for new research groups in surgical robotics. In this paper, we present an extensive review of the publications that have been facilitated by the dVRK over the past decade. We classify research efforts into different categories and outline some of the major challenges and needs for the robotics community to maintain this initiative and build upon it.
