Giovanni Volpe

Deep-learning-powered data analysis in plankton ecology

Sep 15, 2023
Harshith Bachimanchi, Matthew I. M. Pinder, Chloé Robert, Pierre De Wit, Jonathan Havenhand, Alexandra Kinnby, Daniel Midtvedt, Erik Selander, Giovanni Volpe

The implementation of deep learning algorithms has brought new perspectives to plankton ecology. Emerging as an alternative to established methods, deep learning offers objective schemes to investigate plankton organisms in diverse environments. We provide an overview of deep-learning-based methods, including the detection and classification of phyto- and zooplankton images, the analysis of foraging and swimming behaviour, and ecological modelling. Deep learning has the potential to speed up analysis and reduce human experimental bias, thus enabling data acquisition at relevant temporal and spatial scales with improved reproducibility. We also discuss shortcomings and show how deep-learning architectures have evolved to mitigate imprecise readouts. Finally, we suggest opportunities where deep learning is particularly likely to catalyze plankton research. The examples are accompanied by detailed tutorials and code samples that allow readers to apply the methods described in this review to their own data.

* For the associated GitHub repository, see https://github.com/softmatterlab/Deep-learning-in-plankton-ecology 
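
As a minimal illustration of the image-classification step discussed above, the sketch below trains a small convolutional network on labelled plankton crops. It is a generic PyTorch sketch, not one of the repository's tutorials; the `plankton_crops` folder layout, image size, and architecture are placeholder assumptions.

```python
# Minimal plankton-image classifier sketch (generic PyTorch, not the
# repository's own tutorials). Dataset path and class count are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(),              # plankton micrographs are often monochrome
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

# Assumes an ImageFolder layout: plankton_crops/<class_name>/*.png
data = datasets.ImageFolder("plankton_crops", transform=transform)
loader = DataLoader(data, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(data.classes)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```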

Roadmap on Deep Learning for Microscopy

Mar 07, 2023
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C. D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge through improved image quality; automated detection, segmentation, classification, and tracking of objects; and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of the possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical and life sciences.
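
As one concrete example of the image-quality improvements surveyed in the Roadmap, the sketch below trains a small denoising network on synthetically corrupted images. It is a generic PyTorch illustration, not any contributor's method; the architecture, noise model, and data are placeholder assumptions.

```python
# Minimal denoising-network sketch, one of the image-quality tasks the
# Roadmap surveys. Training data are synthetic: random "clean" images
# corrupted with Gaussian noise (all values illustrative).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    clean = torch.rand(8, 1, 64, 64)           # stand-in for real micrographs
    noisy = clean + 0.1 * torch.randn_like(clean)
    loss = nn.functional.mse_loss(model(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```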

Preface: Characterisation of Physical Processes from Anomalous Diffusion Data

Jan 07, 2023
Carlo Manzo, Gorka Muñoz-Gil, Giovanni Volpe, Miguel Angel Garcia-March, Maciej Lewenstein, Ralf Metzler

Preface to the special issue "Characterisation of Physical Processes from Anomalous Diffusion Data" associated with the Anomalous Diffusion Challenge ( https://andi-challenge.org ) and published in Journal of Physics A: Mathematical and Theoretical. The list of articles included in the special issue can be accessed at https://iopscience.iop.org/journal/1751-8121/page/Characterisation-of-Physical-Processes-from-Anomalous-Diffusion-Data .

* Preface to the Special Issue "Characterisation of Physical Processes from Anomalous Diffusion Data", Journal of Physics A: Mathematical and Theoretical https://iopscience.iop.org/journal/1751-8121/page/Characterisation-of-Physical-Processes-from-Anomalous-Diffusion-Data 

Corneal endothelium assessment in specular microscopy images with Fuchs' dystrophy via deep regression of signed distance maps

Oct 13, 2022
Juan S. Sierra, Jesus Pineda, Daniela Rueda, Alejandro Tello, Angelica M. Prada, Virgilio Galvis, Giovanni Volpe, Maria S. Millan, Lenny A. Romero, Andres G. Marrugo

Specular microscopy assessment of the human corneal endothelium (CE) in Fuchs' dystrophy is challenging due to the presence of dark image regions called guttae. This paper proposes a UNet-based segmentation approach that requires minimal post-processing and achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy. We cast the segmentation problem as a regression task on the cell and gutta signed distance maps, instead of the pixel-level classification task typically used with UNets. Compared to the conventional UNet classification approach, the distance-map regression approach converges faster on clinically relevant parameters. It also produces morphometric parameters that agree with the manually segmented ground-truth data, namely an average cell-density difference of -41.9 cells/mm² (95% confidence interval (CI) [-306.2, 222.5]) and an average difference in mean cell area of 14.8 µm² (95% CI [-41.9, 71.5]). These results suggest a promising alternative for CE assessment.
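
The core idea, regressing a signed distance map rather than classifying pixels, can be sketched as follows. This is a generic PyTorch illustration rather than the paper's UNet; the small network and all hyperparameters are assumptions, and only the signed-distance target and regression loss reflect the described method.

```python
# Sketch of segmentation recast as signed-distance-map regression (generic;
# the paper's exact UNet and training details differ). SciPy computes the
# signed-distance target from a binary mask.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Positive distances inside the object, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return inside - outside

# A linear (unbounded) output head replaces the usual sigmoid + cross-entropy:
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),   # regresses the signed distance map
)
loss_fn = nn.MSELoss()                # regression loss instead of classification

# At inference, cell regions are the positive level set of the prediction:
# segmentation = (model(image) > 0)
```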

Single-shot self-supervised particle tracking

Feb 28, 2022
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe

Particle tracking is a fundamental task in digital microscopy. Recently, machine-learning approaches have made great strides in overcoming the limitations of more classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on either vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, the data produced by experiments are often challenging to label and cannot be easily reproduced numerically. Here, we propose a novel deep-learning method, named LodeSTAR (Low-shot deep Symmetric Tracking And Regression), that learns to track objects with sub-pixel accuracy from a single unlabeled experimental image. This is made possible by exploiting the inherent roto-translational symmetries of the data. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy. Furthermore, we analyze challenging experimental data containing densely packed cells or noisy backgrounds. We also exploit additional symmetries to extend the measurable particle properties to the particle's vertical position, by propagating the signal in Fourier space, and its polarizability, by scaling the signal strength. Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines for engineering, biology, and medicine.

* 19 pages, 4 figures 
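
A conceptual sketch of the self-supervised training signal is given below: predictions on shifted copies of a single unlabeled image must follow the known shift (translation equivariance). This is not the authors' implementation (their full code is released with the paper); the toy network, soft-argmax readout, and all parameters are illustrative assumptions.

```python
# Toy version of a symmetry-consistency training signal in the spirit of
# LodeSTAR (conceptual only, not the authors' code): one unlabeled image,
# random known shifts, and a loss that ties the prediction to the shift.
import torch
import torch.nn as nn

net = nn.Sequential(                       # per-pixel score map
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def predict_position(img):
    """Soft-argmax over the score map -> sub-pixel (x, y) estimate."""
    scores = net(img).flatten(1).softmax(dim=1).reshape(img.shape)
    ys, xs = torch.meshgrid(
        torch.arange(img.shape[2], dtype=torch.float32),
        torch.arange(img.shape[3], dtype=torch.float32),
        indexing="ij",
    )
    x = (scores[:, 0] * xs).sum(dim=(1, 2))
    y = (scores[:, 0] * ys).sum(dim=(1, 2))
    return torch.stack([x, y], dim=1)      # shape (batch, 2)

image = torch.rand(1, 1, 64, 64)           # stand-in for the single experimental image
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

for step in range(500):
    dx, dy = [int(s) for s in torch.randint(-8, 9, (2,))]
    shifted = torch.roll(image, shifts=(dy, dx), dims=(2, 3))
    # Self-supervised constraint: the prediction must follow the known shift.
    target = predict_position(image).detach() + torch.tensor([[dx, dy]], dtype=torch.float32)
    loss = nn.functional.mse_loss(predict_position(shifted), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```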

Microplankton life histories revealed by holographic microscopy and deep learning

Feb 18, 2022
Harshith Bachimanchi, Benjamin Midtvedt, Daniel Midtvedt, Erik Selander, Giovanni Volpe

The marine microbial food web plays a central role in the global carbon cycle. Our mechanistic understanding of the ocean, however, is biased towards its larger constituents, while rates and biomass fluxes in the microbial food web are mainly inferred from indirect measurements and ensemble averages. Yet, resolution at the level of the individual microplankton is required to advance our understanding of the oceanic food web. Here, we demonstrate that, by combining holographic microscopy with deep learning, we can follow individual microplankton throughout their lifespan, continuously measuring their three-dimensional position and dry mass. The deep-learning algorithms circumvent the computationally intensive processing of holographic data and allow rapid measurements over extended time periods. This permits us to reliably estimate growth rates, both in terms of dry-mass increase and cell divisions, as well as to measure trophic interactions between species, such as predation events. The individual resolution provides information about selectivity, individual feeding rates, and handling times for individual microplankton. This method is particularly useful to explore the flux of carbon through micro-zooplankton, the most important and least known group of primary consumers in the global oceans. We exemplify this by detailed descriptions of micro-zooplankton feeding events, cell divisions, and long-term monitoring of single cells from division to division.

* 20 pages, 4 figures, 5 supplementary figures 
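
For readers unfamiliar with how dry mass follows from holographic data: once a quantitative phase image has been reconstructed, dry mass is commonly obtained from the integrated optical path length via the specific refractive-index increment. The sketch below shows this standard relation with illustrative numbers; it is not the paper's calibrated pipeline, and the wavelength, pixel size, and increment alpha are assumptions.

```python
# Back-of-the-envelope dry-mass estimate from a quantitative phase image,
# using the standard relation m = (lambda / (2*pi*alpha)) * sum(phase) * pixel_area.
# Generic illustration, not the paper's pipeline; all values are assumptions.
import numpy as np

wavelength = 0.532          # illumination wavelength, micrometres
alpha = 0.18                # specific refractive-index increment, um^3/pg (typical)
pixel_area = 0.1 * 0.1      # um^2 per pixel

phase = np.random.rand(128, 128) * 0.5   # stand-in for a reconstructed phase map, rad

optical_path = phase * wavelength / (2 * np.pi)        # um per pixel
dry_mass_pg = optical_path.sum() * pixel_area / alpha  # um^3 / (um^3/pg) = pg
print(f"Estimated dry mass: {dry_mass_pg:.1f} pg")
```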

Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion

Feb 13, 2022
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and their link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network can perform multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.

* 17 pages, 5 figures, 2 supplementary figures 
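
The linking task can be made concrete with a toy version of the graph construction: detections become nodes, and candidate edges connect detections that are close in space and time; the graph neural network then scores these edges. The sketch below shows only that construction step, with illustrative thresholds; it is not the authors' implementation.

```python
# Toy graph construction for trajectory linking (generic, not the authors'
# code): nodes are detections, candidate edges connect nearby detections in
# nearby frames, and a GNN would then classify which edges are true links.
import numpy as np

# detections: (frame, x, y) rows, e.g. from any particle-localization step
detections = np.array([
    [0, 10.0, 12.0],
    [0, 40.0, 41.0],
    [1, 11.2, 12.5],
    [1, 39.1, 42.0],
])

edges = []
max_gap, max_dist = 2, 5.0                 # illustrative thresholds
for i, (f1, x1, y1) in enumerate(detections):
    for j, (f2, x2, y2) in enumerate(detections):
        if 0 < f2 - f1 <= max_gap and np.hypot(x2 - x1, y2 - y1) < max_dist:
            edges.append((i, j))           # candidate link, to be scored by the GNN

print(edges)   # [(0, 2), (1, 3)] -> two single-particle trajectories
```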

Neural Network Training with Highly Incomplete Datasets

Jul 01, 2021
Yu-Wei Chang, Laura Natali, Oveis Jamialahmadi, Stefano Romeo, Joana B. Pereira, Giovanni Volpe

Neural network training and validation rely on the availability of large high-quality datasets. However, in many cases only incomplete datasets are available, particularly in health-care applications, where each patient typically undergoes different clinical procedures or can drop out of a study. Since the data to train the neural networks need to be complete, most studies discard the incomplete datapoints, which reduces the size of the training data, or impute the missing features, which can lead to artefacts. Alas, both approaches are inadequate when a large portion of the data is missing. Here, we introduce GapNet, an alternative deep-learning training approach that can use highly incomplete datasets. First, the dataset is split into subsets of samples containing all values for a certain cluster of features. Then, these subsets are used to train individual neural networks. Finally, this ensemble of neural networks is combined into a single neural network whose training is fine-tuned using all complete datapoints. Using two highly incomplete real-world medical datasets, we show that GapNet improves the identification of patients with underlying Alzheimer's disease pathology and of patients at risk of hospitalization due to Covid-19. By distilling the information available in incomplete datasets without having to reduce their size or impute missing values, GapNet will make it possible to extract valuable information from a wide range of datasets, benefiting diverse fields from medicine to engineering.

* 11 pages, 3 figures, 1 table 
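
A minimal sketch of this three-stage scheme is given below, assuming two feature clusters and a toy regression target; the layer sizes, clusters, and data are illustrative, not the paper's configuration.

```python
# Minimal sketch of the GapNet training scheme described above (generic
# PyTorch; all sizes and the two feature clusters are illustrative).
import torch
import torch.nn as nn

# Two feature clusters; each sample is complete for at least one of them.
cluster_a, cluster_b = [0, 1, 2], [3, 4]

def make_branch(n_in):
    return nn.Sequential(nn.Linear(n_in, 16), nn.ReLU(), nn.Linear(16, 8), nn.ReLU())

branch_a, branch_b = make_branch(len(cluster_a)), make_branch(len(cluster_b))
head = nn.Linear(16, 1)   # combines both branch embeddings

# Stage 1: train each branch (with its own temporary head) on the samples
# that are complete for its feature cluster -- omitted here for brevity.

# Stage 2: fine-tune the combined network on the fully complete datapoints.
def gapnet(x):
    z = torch.cat([branch_a(x[:, cluster_a]), branch_b(x[:, cluster_b])], dim=1)
    return head(z)

x_complete = torch.rand(32, 5)            # stand-in for complete datapoints
y = torch.rand(32, 1)
optimizer = torch.optim.Adam(
    list(branch_a.parameters()) + list(branch_b.parameters()) + list(head.parameters()),
    lr=1e-3,
)
loss = nn.functional.mse_loss(gapnet(x_complete), y)
loss.backward()
optimizer.step()
```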

Extracting quantitative biological information from brightfield cell images using deep learning

Dec 23, 2020
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe

Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of brightfield images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the brightfield images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using brightfield images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.

* 12 pages, 4 figures 
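
Schematically, the training objective combines an adversarial term with a pixel-wise fidelity term, as in pix2pix-style cGANs. The sketch below shows the generator side of that objective with toy networks and data; the paper's actual architectures, loss weights, and data pipeline differ.

```python
# Schematic of a pix2pix-style cGAN objective for virtual staining (generic
# sketch; the paper's generator/discriminator architectures differ).
import torch
import torch.nn as nn

generator = nn.Sequential(               # brightfield image -> virtual stain
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
discriminator = nn.Sequential(           # judges (brightfield, stain) pairs
    nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
)
bce = nn.BCEWithLogitsLoss()

brightfield = torch.rand(4, 1, 64, 64)   # stand-ins for a real training batch
real_stain = torch.rand(4, 1, 64, 64)

fake_stain = generator(brightfield)
pred_fake = discriminator(torch.cat([brightfield, fake_stain], dim=1))
# Generator loss: fool the discriminator + stay close to the true stain (L1).
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) \
         + 100 * nn.functional.l1_loss(fake_stain, real_stain)
```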

Improving epidemic testing and containment strategies using machine learning

Nov 23, 2020
Laura Natali, Saga Helgadottir, Onofrio M. Marago, Giovanni Volpe

Containment of epidemic outbreaks entails great societal and economic costs. Cost-effective containment strategies rely on efficiently identifying infected individuals, making the best possible use of the available testing resources. Therefore, quickly identifying the optimal testing strategy is of critical importance. Here, we demonstrate that machine learning can be used to identify which individuals are most beneficial to test, automatically and dynamically adapting the testing strategy to the characteristics of the disease outbreak. Specifically, we simulate an outbreak using the archetypal susceptible-infectious-recovered (SIR) model, and we use data about the first confirmed cases to train a neural network that learns to make predictions about the rest of the population. Using these predictions, we contain the outbreak more effectively and more quickly than with standard approaches. Furthermore, we demonstrate how this method can also be used when there is a possibility of reinfection (SIRS model) to efficiently eradicate an endemic disease.

* 11 pages, 4 figures 
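
A toy version of the core loop, ranking untested individuals by predicted infection risk and spending the test budget on the top of the ranking, could look as follows. The features, classifier, and numbers are placeholder assumptions; the paper couples this loop to an SIR simulation, which is omitted here.

```python
# Toy version of the strategy described above: rank untested individuals by a
# neural network's predicted infection risk and test the top-ranked ones.
# Features and numbers are placeholders; the SIR simulation is omitted.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
features = rng.random((1000, 4))         # stand-in features (e.g. contacts, location)
confirmed = rng.integers(0, 2, 100)      # outcomes of the first 100 tests

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
model.fit(features[:100], confirmed)     # learn from the first confirmed cases

risk = model.predict_proba(features[100:])[:, 1]
test_budget = 20
to_test = 100 + np.argsort(risk)[::-1][:test_budget]   # highest-risk individuals
print("Next individuals to test:", to_test)
```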