Jie Ying Wu

MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy
Apr 03, 2024
John J. Han, Ayberk Acar, Nicholas Kavoussi, Jie Ying Wu

Zero-shot Prompt-based Video Encoder for Surgical Gesture Recognition
Mar 28, 2024
Mingxing Rao, Yinhong Qin, Soheil Kolouri, Jie Ying Wu, Daniel Moyer

Depth Anything in Medical Images: A Comparative Study
Jan 29, 2024
John J. Han, Ayberk Acar, Callahan Henry, Jie Ying Wu

Eye Tracking for Tele-robotic Surgery: A Comparative Evaluation of Head-worn Solutions
Oct 18, 2023
Regine Büter, Roger D. Soberanis-Mukul, Paola Ruiz Puentes, Ahmed Ghazi, Jie Ying Wu, Mathias Unberath

Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge
May 11, 2023
Aneeq Zia, Kiran Bhattacharyya, Xi Liu, Max Berniker, Ziheng Wang, Rogerio Nespolo, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Bo Liu, David Austin, Yiheng Wang, Michal Futrega, Jean-Francois Puget, Zhenqiang Li, Yoichi Sato, Ryo Fujii, Ryo Hachiuma, Mana Masuda, Hideo Saito, An Wang, Mengya Xu, Mobarakol Islam, Long Bai, Winnie Pang, Hongliang Ren, Chinedu Nwoye, Luca Sestini, Nicolas Padoy, Maximilian Nielsen, Samuel Schüttler, Thilo Sentker, Hümeyra Husseini, Ivo Baltruschat, Rüdiger Schmitz, René Werner, Aleksandr Matsun, Mugariya Farooq, Numan Saaed, Jose Renato Restom Viera, Mohammad Yaqub, Neil Getty, Fangfang Xia, Zixuan Zhao, Xiaotian Duan, Xing Yao, Ange Lou, Hao Yang, Jintong Han, Jack Noble, Jie Ying Wu, Tamer Abdulbaki Alshirbaji, Nour Aldeen Jalal, Herag Arabian, Ning Ding, Knut Moeller, Weiliang Chen, Quan He, Lena Maier-Hein, Danail Stoyanov, Stefanie Speidel, Anthony Jarc

Rethinking Causality-driven Robot Tool Segmentation with Temporal Constraints
Nov 30, 2022
Hao Ding, Jie Ying Wu, Zhaoshuo Li, Mathias Unberath

CaRTS: Causality-driven Robot Tool Segmentation from Vision and Kinematics Data
Apr 06, 2022
Hao Ding, Jintan Zhang, Peter Kazanzides, Jie Ying Wu, Mathias Unberath

An Interpretable Approach to Automated Severity Scoring in Pelvic Trauma
May 21, 2021
Anna Zapaishchykova, David Dreizin, Zhaoshuo Li, Jie Ying Wu, Shahrooz Faghih Roohi, Mathias Unberath

Estimation of Trocar and Tool Interaction Forces on the da Vinci Research Kit with Two-Step Deep Learning
Dec 11, 2020
Jie Ying Wu, Nural Yilmaz, Peter Kazanzides, Ugur Tumerdem
