Jinah Park

Why is the winner the best?

Mar 30, 2023
Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Sharib Ali, Vincent Andrearczyk, Marc Aubreville, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Veronika Cheplygina, Marie Daum, Marleen de Bruijne, Adrien Depeursinge, Reuben Dorent, Jan Egger, David G. Ellis, Sandy Engelhardt, Melanie Ganz, Noha Ghatwary, Gabriel Girard, Patrick Godau, Anubha Gupta, Lasse Hansen, Kanako Harada, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Pierre Jannin, Ali Emre Kavur, Oldřich Kodym, Michal Kozubek, Jianning Li, Hongwei Li, Jun Ma, Carlos Martín-Isla, Bjoern Menze, Alison Noble, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Tim Rädsch, Jonathan Rafael-Patiño, Vivek Singh Bawa, Stefanie Speidel, Carole H. Sudre, Kimberlin van Wijnen, Martin Wagner, Donglai Wei, Amine Yamlahi, Moi Hoon Yap, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Dogu Baran Aydogan, Binod Bhattarai, Louise Bloch, Raphael Brüngel, Jihoon Cho, Chanyeol Choi, Qi Dou, Ivan Ezhov, Christoph M. Friedrich, Clifton Fuller, Rebati Raman Gaire, Adrian Galdran, Álvaro García Faura, Maria Grammatikopoulou, SeulGi Hong, Mostafa Jahanifar, Ikbeom Jang, Abdolrahim Kadkhodamohammadi, Inha Kang, Florian Kofler, Satoshi Kondo, Hugo Kuijf, Mingxing Li, Minh Huan Luu, Tomaž Martinčič, Pedro Morais, Mohamed A. Naser, Bruno Oliveira, David Owen, Subeen Pang, Jinah Park, Sung-Hong Park, Szymon Płotka, Elodie Puybareau, Nasir Rajpoot, Kanghyun Ryu, Numan Saeed, Adam Shephard, Pengcheng Shi, Dejan Štepec, Ronast Subedi, Guillaume Tochon, Helena R. Torres, Helene Urien, João L. Vilaça, Kareem Abdul Wahid, Haojie Wang, Jiacheng Wang, Liansheng Wang, Xiyue Wang, Benedikt Wiestler, Marek Wodzinski, Fangfang Xia, Juanying Xie, Zhiwei Xiong, Sen Yang, Yanwu Yang, Zixuan Zhao, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein

International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.

* accepted to CVPR 2023 
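
One development strategy of highly-ranked teams named above, reflecting the evaluation metric in the method design, is often realized by optimizing a differentiable surrogate of the metric directly. The sketch below illustrates the idea with a standard soft Dice loss in PyTorch; it is a generic example of that strategy, not code from the study.

```python
import torch

def soft_dice_loss(logits, targets, eps=1e-6):
    # Differentiable surrogate of the Dice metric: predicted probabilities
    # stand in for hard predictions so the metric itself can be optimized.
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    denominator = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    dice = (2 * intersection + eps) / (denominator + eps)
    return 1 - dice.mean()  # minimizing the loss maximizes Dice

# Toy batch: four single-channel 64x64 predictions and binary masks.
logits = torch.randn(4, 1, 64, 64)
targets = (torch.rand(4, 1, 64, 64) > 0.5).float()
print(soft_dice_loss(logits, targets).item())
```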

Joint Embedding of 2D and 3D Networks for Medical Image Anomaly Detection

Dec 23, 2022
Inha Kang, Jinah Park

Obtaining ground truth data in medical imaging is difficult because it requires substantial annotation time from experts in the field. Moreover, a model trained with supervised learning detects only the cases included in its labels, whereas in clinical practice we also want to remain open to possibilities beyond the named cases when examining medical images. This motivates anomaly detection, which can detect and localize abnormalities by learning normal characteristics from normal images alone. With medical image data, we can design either 2D or 3D self-supervised networks for the anomaly detection task. Although 3D networks, which learn the 3D structure of the human body, perform well in 3D medical image anomaly detection, they cannot be stacked into deeper layers due to memory constraints. 2D networks, in contrast, have an advantage in feature detection but lack 3D context information. In this paper, we develop a method that combines the strength of the 3D network and the strength of the 2D network through joint embedding. We also propose a self-supervised pretext task that enables the networks to learn efficiently. Our experiments show that the proposed method achieves better performance than the SoTA method in both classification and segmentation tasks.

* 14 pages, 11 figures 
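
To make the joint-embedding idea above concrete, here is a minimal sketch of one plausible realization: a toy 2D slice encoder and a toy 3D volume encoder that map into a shared embedding space, aligned with a cosine loss. The architectures and the alignment loss are illustrative assumptions, not the networks from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder2D(nn.Module):
    """Toy per-slice encoder (stands in for a deep 2D CNN)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):
        return self.net(x)

class Encoder3D(nn.Module):
    """Toy volumetric encoder (stands in for a shallow 3D CNN)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, dim))
    def forward(self, x):
        return self.net(x)

def alignment_loss(z2d, z3d):
    # Pull the 2D and 3D embeddings of the same volume together.
    return 1 - F.cosine_similarity(z2d, z3d, dim=1).mean()

vol = torch.randn(2, 1, 32, 64, 64)        # (batch, channel, D, H, W)
center = vol[:, :, vol.shape[2] // 2]       # middle slice as the 2D view
z2d, z3d = Encoder2D()(center), Encoder3D()(vol)
print(alignment_loss(z2d, z3d).item())
```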

WG-VITON: Wearing-Guide Virtual Try-On for Top and Bottom Clothes

May 10, 2022
Soonchan Park, Jinah Park

Studies of virtual try-on (VITON) have shown the effectiveness of generative neural networks for virtually exploring fashion products, and some recent VITON research has attempted to synthesize images of a person wearing multiple given types of garments (e.g., top and bottom clothes). However, when replacing both the top and bottom clothes of the target person, numerous wearing styles are possible for a given combination of clothes. In this paper, we address the problem of variation in wearing style when simultaneously replacing the top and bottom clothes of the model. We introduce Wearing-Guide VITON (WG-VITON), which uses an additional binary input mask to control the wearing style of the generated image. Our experiments show that WG-VITON effectively generates images of the model wearing the given top and bottom clothes, and can create complicated wearing styles such as partly tucking the top into the bottom.

* 5 pages 
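
As a rough illustration of how a wearing-guide mask can steer generation, the toy generator below simply concatenates the binary mask with the person and clothing images as an extra input channel. The channel layout and the mask semantics (e.g., 1 for top-over-bottom, 0 for tucked-in) are assumptions for the sketch, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ToyTryOnGenerator(nn.Module):
    """Toy generator conditioned on a wearing-guide mask (assumed design)."""
    def __init__(self):
        super().__init__()
        # 3 (person) + 3 (top) + 3 (bottom) + 1 (wearing-guide mask) channels
        self.net = nn.Sequential(
            nn.Conv2d(10, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, person, top, bottom, guide_mask):
        x = torch.cat([person, top, bottom, guide_mask], dim=1)
        return self.net(x)

g = ToyTryOnGenerator()
person = torch.randn(1, 3, 256, 192)
top = torch.randn(1, 3, 256, 192)
bottom = torch.randn(1, 3, 256, 192)
guide = (torch.rand(1, 1, 256, 192) > 0.5).float()  # wearing-style control
print(g(person, top, bottom, guide).shape)  # torch.Size([1, 3, 256, 192])
```

Changing only `guide` while keeping the same person and clothes would, in a trained model of this kind, yield different wearing styles of the same outfit.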

Mitosis domain generalization in histopathology images -- The MIDOG challenge

Apr 06, 2022
Marc Aubreville, Nikolas Stathonikos, Christof A. Bertram, Robert Klopfleisch, Natalie ter Hoeve, Francesco Ciompi, Frauke Wilm, Christian Marzahl, Taryn A. Donovan, Andreas Maier, Jack Breen, Nishant Ravikumar, Youjin Chung, Jinah Park, Ramin Nateghi, Fattaneh Pourakpour, Rutger H. J. Fick, Saima Ben Hadj, Mostafa Jahanifar, Nasir Rajpoot, Jakob Dexl, Thomas Wittenberg, Satoshi Kondo, Maxime W. Lafarge, Viktor H. Koelzer, Jingtang Liang, Yubo Wang, Xi Long, Jingxin Liu, Salar Razavi, April Khademi, Sen Yang, Xiyue Wang, Mitko Veta, Katharina Breininger

The density of mitotic figures within tumor tissue is known to be highly correlated with tumor proliferation and is thus an important marker in tumor grading. Recognition of mitotic figures by pathologists is known to be subject to a strong inter-rater bias, which limits its prognostic value. State-of-the-art deep learning methods can support the expert in this assessment but are known to deteriorate strongly when applied in a clinical environment different from the one used for training. One decisive component of the underlying domain shift has been identified as the variability caused by using different whole slide scanners. The goal of the MICCAI MIDOG 2021 challenge was to propose and evaluate methods that counter this domain shift and derive scanner-agnostic mitosis detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As the test set, an additional 100 cases, split across four scanning systems including two previously unseen scanners, were provided. The best approaches performed at an expert level, with the winning algorithm yielding an F_1 score of 0.748 (CI95: 0.704-0.781). In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance.

* 19 pages, 9 figures, summary paper of the 2021 MICCAI MIDOG challenge 
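
The result above is reported as an F_1 score with a 95% confidence interval. The snippet below shows one standard way such numbers are obtained: pool per-case detection counts into a single F_1 and bootstrap over test cases. The data are synthetic and the procedure is an assumption for illustration, not necessarily the challenge's exact evaluation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def f1_score(tp, fp, fn):
    # F1 = 2*TP / (2*TP + FP + FN), pooled over all test cases.
    return 2 * tp / (2 * tp + fp + fn)

def bootstrap_ci(cases, n_boot=2000, alpha=0.05):
    # Resample test cases with replacement, recompute the pooled F1,
    # and take the empirical percentiles as the CI95 bounds.
    stats = []
    for _ in range(n_boot):
        sample = cases[rng.integers(0, len(cases), len(cases))]
        tp, fp, fn = sample.sum(axis=0)
        stats.append(f1_score(tp, fp, fn))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Synthetic per-case detection counts (TP, FP, FN) for 100 test cases.
cases = rng.integers(0, 20, size=(100, 3))
tp, fp, fn = cases.sum(axis=0)
print(f1_score(tp, fp, fn), bootstrap_ci(cases))
```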

Domain-Robust Mitotic Figure Detection with Style Transfer

Sep 30, 2021
Youjin Chung, Jihoon Cho, Jinah Park

We propose a new training scheme for domain generalization in mitotic figure detection. Mitotic figures show different characteristics for each scanner. We consider each scanner a 'domain' and the image distribution specific to each domain a 'style'. The goal is to train our network to be robust to scanner type by using images of various 'styles'. To expand the style variance, we transfer the style of each training image into arbitrary styles using a module based on StarGAN. Our model with the proposed training scheme shows positive performance on the MIDOG preliminary test set, which contains scanners never seen during training.

* 2 pages, 3 figures 
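
A minimal sketch of the training-scheme idea described above: restyle each batch with some probability before it reaches the detector, so the detector sees many 'styles' of the same tissue. The `style_transfer` callable stands in for the StarGAN-based module (its signature is a hypothetical placeholder), and an identity-like stub is used so the sketch runs on its own.

```python
import torch

def random_style_augment(images, style_transfer, num_domains=4, p=0.5):
    # With probability p, map the batch to a randomly chosen style domain.
    # style_transfer(images, code) is a stand-in for the trained
    # StarGAN-based module (hypothetical signature, not a real API).
    if torch.rand(1).item() < p:
        code = torch.randint(0, num_domains, (images.size(0),))
        images = style_transfer(images, code)
    return images

# Identity-like stub so the sketch runs without a trained StarGAN.
fake_transfer = lambda imgs, code: imgs + 0.05 * torch.randn_like(imgs)
batch = torch.randn(8, 3, 128, 128)
print(random_style_augment(batch, fake_transfer).shape)
```

In an actual pipeline, the augmented batch would then be fed to the mitosis detector so that its features become invariant to scanner-specific appearance.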

Domain-Robust Mitotic Figure Detection with StyleGAN

Sep 02, 2021
Youjin Chung, Jihoon Cho, Jinah Park

We propose a new training scheme for domain generalization in mitotic figure detection. By considering the image variance caused by different scanner types as different image styles, we train our detection network to be robust to scanner type. To expand the image variance, the domain of each training image is transferred into an arbitrary domain. The proposed style transfer module generates a differently styled image from an input image and a random code, eventually producing variously styled images. Our model with the proposed training scheme shows good performance on the MIDOG preliminary test set, which contains scanners never seen during training.

* 2 pages, 3 figures 