Andreas Maier

SniffyArt: The Dataset of Smelling Persons

Nov 20, 2023
Mathias Zinnen, Azhar Hussian, Hang Tran, Prathmesh Madhu, Andreas Maier, Vincent Christlein

Smell gestures play a crucial role in the investigation of past smells in the visual arts, yet their automated recognition poses significant challenges. This paper introduces the SniffyArt dataset, consisting of 1941 individuals represented in 441 historical artworks. Each person is annotated with a tightly fitting bounding box, 17 pose keypoints, and a gesture label. By integrating these annotations, the dataset enables the development of hybrid classification approaches for smell gesture recognition. The dataset's high-quality human pose estimation keypoints are achieved by merging five separate sets of keypoint annotations per person. The paper also presents a baseline analysis evaluating representative algorithms for detection, keypoint estimation, and classification, showcasing the potential of combining keypoint estimation with smell gesture classification. The SniffyArt dataset lays a solid foundation for future research and the exploration of multi-task approaches leveraging pose keypoints and person boxes to advance human gesture and olfactory dimension analysis in historical artworks.

* Proceedings of the 5th Workshop on analySis, Understanding and proMotion of heritAge Contents. 2023. pp. 49-58  
* 10 pages, 8 figures 
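
A minimal sketch of how one person record in such a dataset might be represented, assuming a COCO-style layout (box as [x, y, width, height], 17 keypoints as (x, y, visibility) triplets); the field names and the gesture label value are illustrative, not taken from the released annotation files:

```python
# Hypothetical COCO-style person record; SniffyArt specifies a tight box,
# 17 pose keypoints, and a gesture label per person, but the exact field
# names and gesture taxonomy here are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class PersonAnnotation:
    bbox: List[float]       # [x, y, width, height], tightly fitting the person
    keypoints: List[float]  # [x1, y1, v1, ..., x17, y17, v17], COCO layout
    gesture: str            # e.g. "sniffing"; label set is illustrative

record = PersonAnnotation(
    bbox=[120.0, 80.0, 64.0, 180.0],
    keypoints=[0.0] * (17 * 3),
    gesture="sniffing",
)
assert len(record.keypoints) == 51  # 17 keypoints x (x, y, visibility)
```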

A Survey of Incremental Transfer Learning: Combining Peer-to-Peer Federated Learning and Domain Incremental Learning for Multicenter Collaboration

Sep 29, 2023
Yixing Huang, Christoph Bert, Ahmed Gomaa, Rainer Fietkau, Andreas Maier, Florian Putz

Due to data privacy constraints, data sharing among multiple clinical centers is restricted, which impedes the development of high-performance deep learning models from multicenter collaboration. Naive weight transfer methods share intermediate model weights without raw data and hence can bypass data privacy restrictions. However, performance drops are typically observed when the model is transferred from one center to the next because of the forgetting problem. Incremental transfer learning, which combines peer-to-peer federated learning and domain incremental learning, can overcome the data privacy issue while preserving model performance through continual learning techniques. In this work, a conventional domain/task incremental learning framework is adapted for incremental transfer learning. A comprehensive survey of the efficacy of different regularization-based continual learning methods for multicenter collaboration is performed. The influences of data heterogeneity, classifier head setting, network optimizer, model initialization, center order, and weight transfer type are investigated thoroughly. Our framework is publicly accessible to the research community for further development.
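
As one concrete instance of the regularization-based methods surveyed, here is a minimal EWC-style sketch in PyTorch, assuming a diagonal Fisher information estimate from the previous center; the variable names are illustrative and not from the published framework:

```python
import torch
import torch.nn as nn

def ewc_penalty(model, old_params, fisher, lam=1.0):
    """Quadratic penalty keeping the current center's weights close to the
    previous center's, weighted by a (diagonal) Fisher information estimate."""
    loss = torch.zeros(())
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss

model = nn.Linear(4, 2)  # stand-in for the model transferred between centers
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # toy

x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
task_loss = nn.functional.cross_entropy(model(x), y)
total = task_loss + ewc_penalty(model, old_params, fisher, lam=0.4)
total.backward()  # local training step that resists forgetting the old center
```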

Conceptual Cognitive Maps Formation with Neural Successor Networks and Word Embeddings

Jul 04, 2023
Paul Stoewer, Achim Schilling, Andreas Maier, Patrick Krauss

The human brain possesses the extraordinary capability to contextualize the information it receives from our environment. The entorhinal-hippocampal complex plays a critical role in this function, as it is deeply engaged in memory processing and in constructing cognitive maps using place and grid cells. Comprehending and leveraging this ability could significantly augment the field of artificial intelligence. The multi-scale successor representation serves as a good model for the functionality of place and grid cells and has already shown promise in this role. Here, we introduce a model that employs successor representations and neural networks, along with word embedding vectors, to construct a cognitive map of three separate concepts. The network adeptly learns two differently scaled maps and situates new information in proximity to related pre-existing representations. The dispersion of information across the cognitive map varies according to its scale: it is either heavily concentrated, resulting in the formation of the three concepts, or spread evenly throughout the map. We suggest that our model could potentially improve current AI models by providing multi-modal context information for any input, based on a similarity metric between the input and pre-existing knowledge representations.
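
For readers unfamiliar with successor representations, a closed-form toy computation is given below, assuming a fixed transition matrix over discrete states; the paper's network learns this quantity rather than computing it by matrix inversion:

```python
import numpy as np

n_states, gamma = 5, 0.9
T = np.full((n_states, n_states), 1.0 / n_states)  # toy random-walk policy

# Successor representation: M = sum_t gamma^t T^t = (I - gamma * T)^(-1).
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# Row i holds the discounted expected future occupancy of each state when
# starting from state i; varying gamma changes the "scale" of the map,
# which is the knob behind the two map scales mentioned above.
print(M[0])
```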

Deep Learning for Cancer Prognosis Prediction Using Portrait Photos by StyleGAN Embedding

Jun 28, 2023
Amr Hagag, Ahmed Gomaa, Dominik Kornek, Andreas Maier, Rainer Fietkau, Christoph Bert, Florian Putz, Yixing Huang

Survival prediction for cancer patients is critical for optimal treatment selection and patient management. Current patient survival prediction methods typically extract survival information from patients' clinical record data or from biological and imaging data. In practice, experienced clinicians can form a preliminary assessment of a patient's health status based on observable physical appearance, mainly facial features. However, such assessment is highly subjective. In this work, the efficacy of objectively capturing and using the prognostic information contained in conventional portrait photographs via deep learning is investigated for survival prediction purposes for the first time. A pre-trained StyleGAN2 model is fine-tuned on a custom dataset of our cancer patients' photos so that its generator can model the distribution of patients' photos. The fine-tuned StyleGAN2 is then used to embed the photographs into its highly expressive latent space. Using state-of-the-art survival analysis models on StyleGAN's latent-space photo embeddings, this approach achieves a C-index of 0.677, which is notably higher than chance and evidences the prognostic value embedded in simple 2D facial images. In addition, thanks to StyleGAN's interpretable latent space, our survival prediction model can be verified to rely on essential facial features, eliminating biases from extraneous information such as clothing or background. Moreover, a health attribute is obtained from the regression coefficients, which has important potential value for patient care.
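
A hedged sketch of the downstream survival step only, assuming latent photo embeddings are already available as tabular features (the StyleGAN2 embedding itself is project-specific and omitted); the data here are entirely synthetic:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, d = 200, 8                        # 8 latent dimensions, for illustration
X = rng.normal(size=(n, d))          # stand-in for StyleGAN latent embeddings
df = pd.DataFrame(X, columns=[f"w{i}" for i in range(d)])
df["time"] = rng.exponential(365.0, n)   # synthetic survival times (days)
df["event"] = rng.integers(0, 2, n)      # 1 = event observed, 0 = censored

cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="time", event_col="event")
print(cph.concordance_index_)  # C-index; the paper reports 0.677 on real data
# The fitted coefficients over latent dimensions are what could be aggregated
# into an interpretable "health attribute" as described above.
```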

A Vessel-Segmentation-Based CycleGAN for Unpaired Multi-modal Retinal Image Synthesis

Jun 05, 2023
Aline Sindel, Andreas Maier, Vincent Christlein

Unpaired image-to-image translation of retinal images can efficiently increase the training dataset for deep-learning-based multi-modal retinal registration methods. Our method integrates a vessel segmentation network into the image-to-image translation task by extending the CycleGAN framework. The segmentation network is inserted prior to a UNet vision transformer generator network and serves as a shared representation between both domains. We reformulate the original identity loss to learn the direct mapping between the vessel segmentation and the real image. Additionally, we add a segmentation loss term to ensure shared vessel locations between fake and real images. In our experiments, the method produces visually realistic images and preserves the vessel structures, which is a prerequisite for generating multi-modal training data for image registration.

* BVM 2023  
* Accepted to BVM 2023 
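
A minimal sketch of the two modified loss terms, using one-layer stand-ins for the segmentation network and the UNet-ViT generator; the exact loss formulations and weighting in the paper may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

seg = nn.Conv2d(3, 1, 1)   # stand-in for the vessel segmentation network
G = nn.Conv2d(1, 3, 1)     # stand-in for the UNet-ViT generator

real = torch.rand(2, 3, 64, 64)        # image from one modality (domain A)

# Reformulated identity loss: the generator should map a real image's
# vessel segmentation straight back to that real image.
loss_idt = F.l1_loss(G(seg(real)), real)

# Segmentation loss: a translated ("fake") image must keep the vessel
# locations of the real image it was generated from.
fake = G(seg(real))                    # in the full CycleGAN, G targets domain B
loss_seg = F.l1_loss(seg(fake), seg(real).detach())

(loss_idt + 0.5 * loss_seg).backward() # 0.5 is an illustrative weight
```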

Federated learning for secure development of AI models for Parkinson's disease detection using speech from different languages

May 18, 2023
Soroosh Tayebi Arasteh, Cristian David Rios-Urrego, Elmar Noeth, Andreas Maier, Seung Hee Yang, Jan Rusz, Juan Rafael Orozco-Arroyave

Parkinson's disease (PD) is a neurological disorder impacting a person's speech. Among automatic PD assessment methods, deep learning models have gained particular interest. Recently, the community has explored cross-pathology and cross-language models, which can improve diagnostic accuracy even further. However, strict patient data privacy regulations largely prevent institutions from sharing patient speech data with each other. In this paper, we employ federated learning (FL) for PD detection using speech signals from 3 real-world language corpora of German, Spanish, and Czech, each from a separate institution. Our results indicate that the FL model outperforms all the local models in terms of diagnostic accuracy, while performing comparably to the model trained on centrally combined training sets, with the advantage of not requiring any data sharing among collaborators. This will simplify inter-institutional collaboration and ultimately enhance patient outcomes.

* Accepted for INTERSPEECH 2023 
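
A minimal federated averaging sketch, assuming three centers with identical architectures exchanging only weights; FedAvg stands in here for whatever aggregation scheme the paper's FL setup actually uses:

```python
import copy
import torch
import torch.nn as nn

def fed_avg(client_models, client_sizes):
    """Average client weights, weighted by local dataset size (FedAvg)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_models[0].state_dict())
    for key in avg:
        avg[key] = sum(
            (n / total) * m.state_dict()[key].float()
            for n, m in zip(client_sizes, client_models)
        )
    return avg

# Three centers (German, Spanish, Czech) with stand-in PD classifiers:
clients = [nn.Linear(40, 2) for _ in range(3)]
global_state = fed_avg(clients, client_sizes=[120, 90, 60])
for m in clients:
    m.load_state_dict(global_state)  # only weights cross institutional borders
```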

DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement

May 14, 2023
Hendrik Schröter, Tobias Rosenkranz, Alberto N. Escalante-B., Andreas Maier

Multi-frame algorithms for single-channel speech enhancement are able to take advantage of short-time correlations within the speech signal. Deep Filtering (DF) was proposed to directly estimate a complex filter in the frequency domain to exploit these correlations. In this work, we present a real-time speech enhancement demo using DeepFilterNet. DeepFilterNet's efficiency is enabled by exploiting domain knowledge of speech production and psychoacoustic perception. Our model matches state-of-the-art speech enhancement benchmarks while achieving a real-time factor of 0.19 on a single-threaded notebook CPU. The framework as well as pretrained weights have been published under an open-source license.

* Accepted as a show-and-tell demo at Interspeech 2023 
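
A minimal sketch of the deep filtering operation itself, assuming an order-N complex filter per time-frequency bin has already been estimated (in DeepFilterNet a network predicts these coefficients); the tensor layout is an illustrative convention:

```python
import torch

def deep_filter(spec, coefs):
    """Apply a causal complex FIR filter along the time axis of an STFT.
    spec:  complex spectrogram, shape (T, F)
    coefs: complex coefficients, shape (N, T, F); coefs[N-1] weights the
           current frame, coefs[0] the oldest of the N frames."""
    N, T = coefs.shape[0], spec.shape[0]
    pad = torch.zeros(N - 1, spec.shape[1], dtype=spec.dtype)
    padded = torch.cat([pad, spec], dim=0)           # zero history at t < 0
    frames = torch.stack([padded[k : k + T] for k in range(N)])
    return (coefs * frames).sum(dim=0)               # multiply-accumulate over lags

T, F, N = 100, 257, 5
spec = torch.randn(T, F, dtype=torch.complex64)
coefs = torch.randn(N, T, F, dtype=torch.complex64)
assert deep_filter(spec, coefs).shape == (T, F)
```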

Deep Multi-Frame Filtering for Hearing Aids

May 14, 2023
Hendrik Schröter, Tobias Rosenkranz, Alberto N. Escalante-B., Andreas Maier

Multi-frame algorithms for single-channel speech enhancement are able to take advantage of short-time correlations within the speech signal. Deep filtering (DF) recently demonstrated its capabilities in low-latency scenarios like hearing aids (HAs) with its complex multi-frame (MF) filter. Alternatively, the complex filter can be estimated via an MF minimum variance distortionless response (MVDR) filter or an MF Wiener filter (WF). Previous studies have shown that incorporating algorithm domain knowledge using an MVDR filter might be beneficial compared to direct filter estimation via DF. In this work, we compare the usage of various multi-frame filters, such as DF, MF-MVDR, and MF-WF, for HAs. We assess different covariance estimation methods for both MF-MVDR and MF-WF and objectively demonstrate improved performance compared to direct DF estimation, significantly outperforming related work while improving runtime performance.

* Submitted to Interspeech 2023 
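
For reference, a minimal MF-MVDR computation for a single frequency bin, assuming the noise covariance and the speech inter-frame correlation vector are already estimated; those estimation methods are precisely what the paper compares:

```python
import numpy as np

def mf_mvdr(R_noise, a):
    """h = R^-1 a / (a^H R^-1 a): minimum output variance under the
    distortionless constraint a^H h = 1."""
    Rinv_a = np.linalg.solve(R_noise, a)
    return Rinv_a / (a.conj() @ Rinv_a)

rng = np.random.default_rng(0)
N = 5                                    # filter order (number of frames)
X = rng.normal(size=(N, 1000)) + 1j * rng.normal(size=(N, 1000))
R = X @ X.conj().T / X.shape[1]          # toy noise covariance estimate
a = np.ones(N, dtype=complex)            # toy speech correlation vector
h = mf_mvdr(R, a)
print(abs(a.conj() @ h))                 # ~= 1: distortionless constraint holds
```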

Joint MR sequence optimization beats pure neural network approaches for spin-echo MRI super-resolution

May 12, 2023
Hoai Nam Dang, Vladimir Golkov, Thomas Wimmer, Daniel Cremers, Andreas Maier, Moritz Zaiss

Current MRI super-resolution (SR) methods only use existing contrasts acquired from typical clinical sequences as input for the neural network (NN). In turbo spin echo (TSE) sequences, the sequence parameters can have a strong influence on the actual resolution of the acquired image and consequently a considerable impact on the performance of the NN. We propose a known-operator learning approach to perform an end-to-end optimization of MR sequence and neural network parameters for SR-TSE. This MR-physics-informed training procedure jointly optimizes the radiofrequency pulse train of a proton density- (PD-) and T2-weighted TSE and a subsequently applied convolutional neural network to predict the corresponding PDw and T2w super-resolution TSE images. The resulting radiofrequency pulse train designs generate an optimal signal for the NN to perform the SR task. Our method generalizes from the simulation-based optimization to in vivo measurements, and the acquired physics-informed SR images show higher correlation with a time-consuming segmented high-resolution TSE sequence compared to a pure network training approach.

* 13 pages, 4 figures, 3 tables, submitted to MICCAI 2023 for review 
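
A conceptual sketch of the joint optimization loop, with a trivially differentiable placeholder standing in for the Bloch/EPG simulation of the TSE acquisition; only the structure (one loss, two parameter groups) reflects the paper:

```python
import torch
import torch.nn as nn

flip_angles = nn.Parameter(torch.full((64,), 90.0))  # RF pulse train (degrees)
cnn = nn.Conv2d(1, 1, 3, padding=1)                  # stand-in SR network

def simulate(phantom, angles):
    """Placeholder for a differentiable MR signal simulation; it only needs
    to let gradients flow from the image loss back to the pulse train."""
    return phantom * torch.sigmoid(angles.mean() / 90.0)

opt = torch.optim.Adam([
    {"params": [flip_angles], "lr": 1.0},     # sequence parameters
    {"params": cnn.parameters(), "lr": 1e-3}, # network parameters
])

phantom = torch.rand(1, 1, 32, 32)            # toy high-resolution target
for _ in range(10):
    opt.zero_grad()
    pred = cnn(simulate(phantom, flip_angles))
    loss = nn.functional.mse_loss(pred, phantom)
    loss.backward()                           # gradients reach both groups
    opt.step()
```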

Building a Non-native Speech Corpus Featuring Chinese-English Bilingual Children: Compilation and Rationale

Apr 30, 2023
Hiuchung Hung, Andreas Maier, Thorsten Piske

This paper introduces a non-native speech corpus consisting of narratives from fifty 5- to 6-year-old Chinese-English bilingual children. Transcripts totaling 6.5 hours of children taking a narrative comprehension test in English (L2) are presented, along with human-rated scores and annotations of grammatical and pronunciation errors. The children also completed the parallel MAIN tests in Chinese (L1) for reference purposes. For all tests, we recorded audio and video using our self-developed remote collection methods. The video recordings serve to mitigate the challenge of low intelligibility of L2 narratives produced by young children during the transcription process. This corpus offers valuable resources for second language teaching and has the potential to enhance the overall performance of automatic speech recognition (ASR).
