Martin Weigert

Self-supervised dense representation learning for live-cell microscopy with time arrow prediction

May 09, 2023
Benjamin Gallusser, Max Stieber, Martin Weigert

State-of-the-art object detection and segmentation methods for microscopy images rely on supervised machine learning, which requires laborious manual annotation of training data. Here we present a self-supervised method based on time arrow prediction pre-training that learns dense image representations from raw, unlabeled live-cell microscopy videos. Our method builds upon the task of predicting the correct order of time-flipped image regions via a single-image feature extractor and a subsequent time arrow prediction head. We show that the resulting dense representations capture inherently time-asymmetric biological processes such as cell divisions at the pixel level. We furthermore demonstrate the utility of these representations on several live-cell microscopy datasets for detection and segmentation of dividing cells, as well as for cell state classification. Our method outperforms supervised methods, particularly when only limited ground truth annotations are available as is commonly the case in practice. We provide code at https://github.com/weigertlab/tarrow.
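
A minimal sketch of the time arrow prediction pretext task, assuming PyTorch; the class and function names below are illustrative stand-ins, not the API of the linked tarrow repository:

    # Sketch of time arrow prediction (TAP) pre-training in PyTorch.
    # Assumes x0, x1 are consecutive image crops of shape (B, 1, H, W);
    # the backbone and head are hypothetical stand-ins.
    import torch
    import torch.nn as nn

    class TAPModel(nn.Module):
        def __init__(self, backbone: nn.Module, n_features: int):
            super().__init__()
            self.backbone = backbone              # dense single-image feature extractor
            self.head = nn.Sequential(            # time arrow prediction head
                nn.Conv2d(2 * n_features, 32, 3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, 2),                 # two classes: forward vs. reversed
            )

        def forward(self, x0, x1):
            f = torch.cat([self.backbone(x0), self.backbone(x1)], dim=1)
            return self.head(f)

    def tap_step(model, x0, x1, optimizer):
        """One training step: randomly flip the temporal order and predict it."""
        flip = torch.rand(x0.shape[0]) < 0.5      # which pairs to reverse in time
        a = torch.where(flip[:, None, None, None], x1, x0)
        b = torch.where(flip[:, None, None, None], x0, x1)
        loss = nn.functional.cross_entropy(model(a, b), flip.long())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Since time-symmetric image regions give no usable signal for this task, the gradient concentrates on time-asymmetric events such as divisions, which is what makes the learned dense features informative downstream.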

CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting

Mar 14, 2023
Simon Graham, Quoc Dang Vu, Mostafa Jahanifar, Martin Weigert, Uwe Schmidt, Wenhua Zhang, Jun Zhang, Sen Yang, Jinxi Xiang, Xiyue Wang, Josef Lorenz Rumberger, Elias Baumann, Peter Hirsch, Lihao Liu, Chenyang Hong, Angelica I. Aviles-Rivero, Ayushi Jain, Heeyoung Ahn, Yiyu Hong, Hussam Azzuni, Min Xu, Mohammad Yaqub, Marie-Claire Blache, Benoît Piégu, Bertrand Vernay, Tim Scherr, Moritz Böhland, Katharina Löffler, Jiachen Li, Weiqin Ying, Chixin Wang, Dagmar Kainmueller, Carola-Bibiane Schönlieb, Shuolin Liu, Dhairya Talsania, Yughender Meda, Prakash Mishra, Muhammad Ridzuan, Oliver Neumann, Marcel P. Schilling, Markus Reischl, Ralf Mikut, Banban Huang, Hsiang-Chin Chien, Ching-Ping Wang, Chia-Yen Lee, Hong-Kun Lin, Zaiyi Liu, Xipeng Pan, Chu Han, Jijun Cheng, Muhammad Dawood, Srijay Deshpande, Raja Muhammad Saad Bashir, Adam Shephard, Pedro Costa, João D. Nunes, Aurélio Campilho, Jaime S. Cardoso, Hrishikesh P S, Densen Puthussery, Devika R G, Jiji C V, Ye Zhang, Zijie Fang, Zhifan Lin, Yongbing Zhang, Chunhui Lin, Liukun Zhang, Lijian Mao, Min Wu, Vi Thi-Tuong Vo, Soo-Hyung Kim, Taebum Lee, Satoshi Kondo, Satoshi Kasai, Pranay Dumbhare, Vedant Phuse, Yash Dubey, Ankush Jamthikar, Trinh Thi Le Vuong, Jin Tae Kwak, Dorsa Ziaei, Hyun Jung, Tianyi Miao, David Snead, Shan E Ahmed Raza, Fayyaz Minhas, Nasir M. Rajpoot

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.

Organelle-specific segmentation, spatial analysis, and visualization of volume electron microscopy datasets

Mar 07, 2023
Andreas Müller, Deborah Schmidt, Lucas Rieckert, Michele Solimena, Martin Weigert

Volume electron microscopy is the method of choice for the in-situ interrogation of cellular ultrastructure at the nanometer scale. Recent technical advances have led to a rapid increase in large raw image datasets that require computational strategies for segmentation and spatial analysis. In this protocol, we describe a practical and annotation-efficient pipeline for organelle-specific segmentation, spatial analysis, and visualization of large volume electron microscopy datasets using freely available, user-friendly software tools that can be run on a single standard workstation. We specifically target researchers in the life sciences with limited computational expertise who face the following tasks within their volume electron microscopy projects: i) how to generate 3D segmentation labels for different types of cell organelles while minimizing manual annotation efforts, ii) how to analyze the spatial interactions between organelle instances, and iii) how to best visualize the 3D segmentation results. To meet these demands, we give detailed guidelines for choosing the most efficient segmentation tools for the specific cell organelle. We furthermore provide easily executable components for spatial analysis and 3D rendering and bridge compatibility issues between freely available open-source tools, such that others can replicate our full pipeline starting from a raw dataset up to the final plots and rendered images. We believe that our detailed description can serve as a valuable reference for similar projects requiring special strategies for single- or multiple-organelle analysis, which can be achieved with computational resources commonly available to single-user setups.
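
As a concrete illustration of the spatial-analysis step, a hedged sketch using the common scipy stack (the label volumes, names, and voxel sizes below are assumptions, not the protocol's actual components) that measures, for every organelle instance of one type, the distance to the nearest instance of another type:

    # Sketch: distance from each organelle instance of type A to the
    # nearest voxel of any type-B organelle on an anisotropic voxel grid.
    # Assumes labels_a / labels_b are 3D integer instance label volumes
    # and voxel_size is (z, y, x) in nm; values are illustrative.
    import numpy as np
    from scipy import ndimage

    def nearest_b_distance(labels_a, labels_b, voxel_size=(40.0, 10.0, 10.0)):
        # Euclidean distance (in nm) from every voxel to the nearest B voxel;
        # `sampling` accounts for the anisotropic voxel size typical of vEM.
        dist_to_b = ndimage.distance_transform_edt(labels_b == 0,
                                                   sampling=voxel_size)
        results = {}
        for inst in np.unique(labels_a):
            if inst == 0:                  # 0 = background
                continue
            # minimal surface-to-surface distance for this A instance
            results[inst] = float(dist_to_b[labels_a == inst].min())
        return results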

Nuclei instance segmentation and classification in histopathology images with StarDist

Mar 24, 2022
Martin Weigert, Uwe Schmidt

Instance segmentation and classification of nuclei is an important task in computational pathology. We show that StarDist, a deep learning based nuclei segmentation method originally developed for fluorescence microscopy, can be extended and successfully applied to histopathology images. This is substantiated by conducting experiments on the Lizard dataset, and through entering the Colon Nuclei Identification and Counting (CoNIC) challenge 2022. At the end of the preliminary test phase of CoNIC, our approach ranked first on the leaderboard for the segmentation and classification task.
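
For readers who want to try this, StarDist is distributed as a Python package with pretrained models, including one for H&E histopathology. A minimal usage sketch (the input filename is a placeholder):

    # Sketch: nuclei instance segmentation on an H&E image with StarDist.
    # Assumes the stardist, csbdeep and tifffile packages are installed.
    from csbdeep.utils import normalize
    from stardist.models import StarDist2D
    from tifffile import imread

    img = imread("he_tile.tif")                            # RGB H&E image (placeholder path)
    model = StarDist2D.from_pretrained("2D_versatile_he")  # pretrained H&E model
    labels, details = model.predict_instances(normalize(img, 1, 99.8))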

Nuclei segmentation and classification in histopathology images with StarDist for the CoNIC Challenge 2022

Mar 03, 2022
Martin Weigert, Uwe Schmidt

Segmentation and classification of nuclei in histopathology images is an important task in computational pathology. Here we describe how we used StarDist, a deep-learning approach based on star-convex shape representations, for the Colon Nuclei Identification and Counting (CoNIC) challenge 2022.

Fast and accurate aberration estimation from 3D bead images using convolutional neural networks

Jun 02, 2020
Debayan Saha, Uwe Schmidt, Qinrong Zhang, Aurelien Barbotin, Qi Hu, Na Ji, Martin J. Booth, Martin Weigert, Eugene W. Myers

Estimating optical aberrations from volumetric intensity images is a key step in sensorless adaptive optics for microscopy. Here we describe a method (PHASENET) for fast and accurate aberration measurement from experimentally acquired 3D bead images using convolutional neural networks. Importantly, we show that networks trained only on synthetically generated data can successfully predict aberrations from experimental images. We demonstrate our approach on two data sets acquired with different microscopy modalities and find that PHASENET yields results better than or comparable to classical methods while being orders of magnitude faster. We furthermore show that the number of focal planes required for satisfactory prediction is related to different symmetry groups of Zernike modes. PHASENET is freely available as open-source software in Python.
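
The underlying forward model treats the wavefront as a weighted sum of Zernike modes, phi = sum_i a_i * Z_i(rho, theta), and the PSF follows from the aberrated pupil. A hedged numpy sketch of this scalar Fourier-optics picture, which is also what makes purely synthetic training data possible (grid, mode, and amplitude below are illustrative choices, not PHASENET's actual parameters):

    # Sketch: focal-plane PSF of a circular pupil with one Zernike mode
    # (scalar diffraction on a unit-free grid; values are illustrative).
    import numpy as np

    n = 256
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = (rho <= 1.0).astype(float)          # circular aperture

    # astigmatism-like Zernike mode rho^2 * cos(2*theta), amplitude in rad
    phi = 0.8 * rho**2 * np.cos(2 * theta)

    field = pupil * np.exp(1j * phi)            # aberrated pupil function
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
    psf /= psf.sum()                            # normalized intensity PSF

Regressing the coefficients a_i from such simulated (aberration, image) pairs is what the CNN is trained to invert.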

An interpretable automated detection system for FISH-based HER2 oncogene amplification testing in histo-pathological routine images of breast and gastric cancer diagnostics

May 25, 2020
Sarah Schmell, Falk Zakrzewski, Walter de Back, Martin Weigert, Uwe Schmidt, Torsten Wenke, Silke Zeugner, Robert Mantey, Christian Sperling, Ingo Roeder, Pia Hoenscheid, Daniela Aust, Gustavo Baretton

Histo-pathological diagnostics are an inherent part of everyday clinical work, yet they are particularly laborious and involve time-consuming manual analysis of image data. In order to cope with increasing diagnostic case numbers due to the current growth and demographic change of the global population and the progress in personalized medicine, pathologists ask for assistance. Profiting from digital pathology and the use of artificial intelligence, individual solutions can be offered (e.g. detecting labeled cancer tissue sections). Testing the human epidermal growth factor receptor 2 (HER2) oncogene amplification status via fluorescence in situ hybridization (FISH) is recommended for breast and gastric cancer diagnostics and is regularly performed at clinics. Here, we develop an interpretable, deep learning (DL)-based pipeline that automates the evaluation of FISH images with respect to HER2 gene amplification testing. It mimics the pathological assessment and relies on the detection and localization of interphase nuclei based on instance segmentation networks. Furthermore, it localizes and classifies fluorescence signals within each nucleus with the help of image classification and object detection convolutional neural networks (CNNs). Finally, the pipeline classifies the whole image with respect to its HER2 amplification status. Visualizing the pixels on which the networks base their decisions adds an essential component for interpretability by pathologists.
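
To illustrate the final image-level decision, a hedged sketch of the kind of scoring rule such pipelines approximate; the per-nucleus signal counts are assumed to come from the upstream detection networks, and the 2.0 ratio cut-off follows common HER2 FISH scoring guidelines rather than the paper's exact rule:

    # Sketch: image-level HER2 amplification call from per-nucleus FISH
    # signal counts. Assumes upstream networks produced, per nucleus, the
    # numbers of detected HER2 and CEN17 signals; the threshold mirrors
    # common scoring guidelines (illustrative, not the paper's rule).
    def her2_status(nuclei, ratio_cutoff=2.0):
        """nuclei: list of (n_her2_signals, n_cen17_signals) per nucleus."""
        her2 = sum(h for h, c in nuclei)
        cen17 = sum(c for h, c in nuclei)
        ratio = her2 / max(cen17, 1)          # guard against division by zero
        return "amplified" if ratio >= ratio_cutoff else "not amplified"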

Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy

Aug 09, 2019
Martin Weigert, Uwe Schmidt, Robert Haase, Ko Sugawara, Gene Myers

Accurate detection and segmentation of cell nuclei in volumetric (3D) fluorescence microscopy datasets is an important step in many biomedical research projects. Although many automated methods for these tasks exist, they often struggle on images with low signal-to-noise ratios and/or dense packing of nuclei. It was recently shown for 2D microscopy images that these issues can be alleviated by training a neural network to directly predict a suitable shape representation (star-convex polygon) for cell nuclei. In this paper, we adopt and extend this approach to 3D volumes by using star-convex polyhedra to represent cell nuclei and similar shapes. To that end, we overcome the challenges of 1) finding parameter-efficient star-convex polyhedra representations that can faithfully describe cell nuclei shapes, 2) adapting to anisotropic voxel sizes often found in fluorescence microscopy datasets, and 3) efficiently computing intersections between pairs of star-convex polyhedra (required for non-maximum suppression). Although our approach is quite general, since star-convex polyhedra subsume common shapes like bounding boxes and spheres as special cases, our focus is on accurate detection and segmentation of cell nuclei. On two challenging datasets, we demonstrate that our approach (StarDist-3D) leads to superior results compared to classical and deep-learning-based methods.
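
To make the shape representation concrete: a star-convex polyhedron stores, for a voxel inside an instance, the distance to the instance boundary along a fixed set of rays. A naive hedged sketch of computing these distances by ray marching on an anisotropic grid (names, step size, and voxel sizes are assumptions, not the StarDist-3D implementation):

    # Sketch: star-convex polyhedron distances for one voxel by naive ray
    # marching. labels is a 3D integer instance volume, center a (z, y, x)
    # tuple inside one instance, dirs an array of unit vectors (K, 3) in
    # physical space; anisotropy is handled by rescaling with voxel_size.
    import numpy as np

    def ray_distances(labels, center, dirs, voxel_size=(2.0, 1.0, 1.0), step=0.5):
        inst = labels[center]                     # id of the enclosing instance
        vs = np.asarray(voxel_size)
        out = np.zeros(len(dirs))
        for k, d in enumerate(dirs):
            r = 0.0
            while True:
                r += step
                # physical displacement -> voxel indices
                p = np.round(np.asarray(center) + r * d / vs).astype(int)
                if (np.any(p < 0) or np.any(p >= labels.shape)
                        or labels[tuple(p)] != inst):
                    out[k] = r                    # boundary (or volume edge) hit
                    break
        return out

Setting all distances equal recovers a sphere, which is one way to see that the representation subsumes simpler shapes as special cases.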

Cell Detection with Star-convex Polygons

Jun 09, 2018
Uwe Schmidt, Martin Weigert, Coleman Broaddus, Gene Myers

Automatic detection and segmentation of cells and nuclei in microscopy images is important for many biological applications. Recent successful learning-based approaches include per-pixel cell segmentation with subsequent pixel grouping, or localization of bounding boxes with subsequent shape refinement. In situations of crowded cells, these can be prone to segmentation errors, such as falsely merging bordering cells or suppressing valid cell instances due to the poor approximation with bounding boxes. To overcome these issues, we propose to localize cell nuclei via star-convex polygons, which are a much better shape representation than bounding boxes and thus do not need shape refinement. To that end, we train a convolutional neural network that predicts for every pixel a polygon for the cell instance at that position. We demonstrate the merits of our approach on two synthetic datasets and one challenging dataset of diverse fluorescence microscopy images.

* Accepted at MICCAI 2018 
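
Since every pixel proposes a polygon, the dense set of overlapping candidates must be pruned; StarDist-style methods do this with greedy non-maximum suppression (mentioned explicitly in the 3D paper above). A hedged sketch of that greedy step, where polygon_iou is a hypothetical placeholder for the exact polygon-overlap computation, not a library function:

    # Sketch: greedy non-maximum suppression over polygon candidates.
    # candidates is a list of (score, polygon) pairs, where score is the
    # predicted object probability; polygon_iou is a hypothetical helper.
    def nms(candidates, polygon_iou, iou_threshold=0.5):
        kept = []
        # visit high-probability candidates first
        for score, poly in sorted(candidates, key=lambda c: c[0], reverse=True):
            if all(polygon_iou(poly, k) < iou_threshold for _, k in kept):
                kept.append((score, poly))
        return kept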

Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks

Apr 05, 2017
Martin Weigert, Loic Royer, Florian Jug, Gene Myers

Fluorescence microscopy images usually show severe anisotropy in axial versus lateral resolution. This hampers downstream processing, i.e. the automatic extraction of quantitative biological data. While deconvolution methods and other techniques to address this problem exist, they are either time-consuming to apply or limited in their ability to remove anisotropy. We propose a method to recover isotropic resolution from readily acquired anisotropic data. We achieve this using a convolutional neural network that is trained end-to-end on the same anisotropic body of data that we later apply the network to. The network effectively learns to restore the full isotropic resolution by restoring the image under a trained, sample-specific image prior. We apply our method to 3 synthetic and 3 real datasets and show that our results improve on results from deconvolution and state-of-the-art super-resolution techniques. Finally, we demonstrate that a standard 3D segmentation pipeline performs with comparable accuracy on the output of our network as on fully isotropic data.
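
One way to see why no isotropic ground truth is needed: training pairs can be simulated from the data itself by degrading the sharp lateral (xy) slices the way the axial (z) direction is degraded, then training the network to undo it. A hedged numpy/scipy sketch of this pair generation, with a Gaussian blur as a stand-in for the true axial PSF and an illustrative subsampling factor:

    # Sketch: self-supervised training pairs for isotropic restoration.
    # A sharp lateral slice is blurred and subsampled along one axis to
    # mimic anisotropic axial sampling; the network is then trained to
    # map the degraded slice back to the original. Gaussian blur and
    # factor 4 are illustrative stand-ins for the real PSF/anisotropy.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d, zoom

    def make_pair(xy_slice, subsample=4, sigma=2.0):
        target = xy_slice.astype(np.float32)
        blurred = gaussian_filter1d(target, sigma=sigma, axis=0)  # axial-like blur
        low = blurred[::subsample]                                # coarse sampling
        # upsample back to the original grid -> network input
        source = zoom(low, (subsample, 1), order=1)
        return source, target

At inference time the trained network is applied to the interpolated axial slices, which is how the sample-specific prior learned from lateral views restores the axial resolution.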
