Oliver Neumann

Transformer Training Strategies for Forecasting Multiple Load Time Series

Jun 19, 2023
Matthias Hertel, Maximilian Beichter, Benedikt Heidrich, Oliver Neumann, Benjamin Schäfer, Ralf Mikut, Veit Hagenmeyer

Recent work uses Transformers, the state of the art for sequence modeling tasks in data-rich domains, for load forecasting. In the smart grid of the future, accurate load forecasts must be provided on the level of individual clients of an energy supplier. While the total amount of electrical load data available to an energy supplier will increase with the ongoing smart meter rollout, the amount of data per client will always be limited. We test whether the Transformer benefits from a transfer learning strategy, where a global model is trained on the load time series data from multiple clients. We find that the global model is superior to two other training strategies commonly used in related work: multivariate models and local models. A comparison to linear models and multi-layer perceptrons shows that Transformers are effective for electrical load forecasting when they are trained with the right strategy.
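
To make the three training strategies concrete, here is a minimal sketch in Python. It is an illustration only: a plain scikit-learn regressor stands in for the Transformer, and the synthetic client series, window length, and helper make_windows are assumptions rather than the paper's actual setup.

```python
# Illustrative sketch of the three training strategies (not the paper's code).
# A simple sklearn regressor stands in for the Transformer; "clients" maps
# client IDs to univariate load series of shape (time,).
import numpy as np
from sklearn.linear_model import LinearRegression

def make_windows(series, lookback=24, horizon=1):
    """Turn one load series into (input window, target value) training pairs."""
    X, y = [], []
    for t in range(lookback, len(series) - horizon + 1):
        X.append(series[t - lookback:t])
        y.append(series[t + horizon - 1])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
clients = {f"client_{i}": rng.random(500) for i in range(5)}  # synthetic load data

# Local strategy: one separate model per client, trained only on that client's data.
local_models = {}
for name, series in clients.items():
    X, y = make_windows(series)
    local_models[name] = LinearRegression().fit(X, y)

# Global strategy: a single model trained on the pooled windows of all clients.
X_all = np.concatenate([make_windows(s)[0] for s in clients.values()])
y_all = np.concatenate([make_windows(s)[1] for s in clients.values()])
global_model = LinearRegression().fit(X_all, y_all)

# Multivariate strategy: one model that maps the joint history of all clients
# to the next value of every client at once.
joint = np.stack(list(clients.values()), axis=1)   # shape (time, n_clients)
Xm, ym = [], []
for t in range(24, len(joint)):
    Xm.append(joint[t - 24:t].ravel())             # flattened joint window
    ym.append(joint[t])                            # next step for all clients
multivariate_model = LinearRegression().fit(np.array(Xm), np.array(ym))
```

The difference lies in what each model sees during training: local models see only their own client, the global model sees the pooled windows of all clients, and the multivariate model sees the joint cross-client history.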

CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting

Mar 14, 2023
Simon Graham, Quoc Dang Vu, Mostafa Jahanifar, Martin Weigert, Uwe Schmidt, Wenhua Zhang, Jun Zhang, Sen Yang, Jinxi Xiang, Xiyue Wang, Josef Lorenz Rumberger, Elias Baumann, Peter Hirsch, Lihao Liu, Chenyang Hong, Angelica I. Aviles-Rivero, Ayushi Jain, Heeyoung Ahn, Yiyu Hong, Hussam Azzuni, Min Xu, Mohammad Yaqub, Marie-Claire Blache, Benoît Piégu, Bertrand Vernay, Tim Scherr, Moritz Böhland, Katharina Löffler, Jiachen Li, Weiqin Ying, Chixin Wang, Dagmar Kainmueller, Carola-Bibiane Schönlieb, Shuolin Liu, Dhairya Talsania, Yughender Meda, Prakash Mishra, Muhammad Ridzuan, Oliver Neumann, Marcel P. Schilling, Markus Reischl, Ralf Mikut, Banban Huang, Hsiang-Chin Chien, Ching-Ping Wang, Chia-Yen Lee, Hong-Kun Lin, Zaiyi Liu, Xipeng Pan, Chu Han, Jijun Cheng, Muhammad Dawood, Srijay Deshpande, Raja Muhammad Saad Bashir, Adam Shephard, Pedro Costa, João D. Nunes, Aurélio Campilho, Jaime S. Cardoso, Hrishikesh P S, Densen Puthussery, Devika R G, Jiji C V, Ye Zhang, Zijie Fang, Zhifan Lin, Yongbing Zhang, Chunhui Lin, Liukun Zhang, Lijian Mao, Min Wu, Vi Thi-Tuong Vo, Soo-Hyung Kim, Taebum Lee, Satoshi Kondo, Satoshi Kasai, Pranay Dumbhare, Vedant Phuse, Yash Dubey, Ankush Jamthikar, Trinh Thi Le Vuong, Jin Tae Kwak, Dorsa Ziaei, Hyun Jung, Tianyi Miao, David Snead, Shan E Ahmed Raza, Fayyaz Minhas, Nasir M. Rajpoot

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state of the art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.

ProbPNN: Enhancing Deep Probabilistic Forecasting with Statistical Information

Feb 06, 2023
Benedikt Heidrich, Kaleb Phipps, Oliver Neumann, Marian Turowski, Ralf Mikut, Veit Hagenmeyer

Probabilistic forecasts are essential for various downstream applications such as business development, traffic planning, and electrical grid balancing. Many of these probabilistic forecasts are performed on time series data that contain calendar-driven periodicities. However, existing probabilistic forecasting methods do not explicitly take these periodicities into account. Therefore, in the present paper, we introduce ProbPNN, a deep learning-based method that considers these calendar-driven periodicities explicitly. The present paper thus has a twofold contribution: First, we apply statistical methods that use calendar-driven prior knowledge to create rolling statistics and combine them with neural networks to provide better probabilistic forecasts. Second, we benchmark ProbPNN against state-of-the-art methods by comparing the achieved normalised continuous ranked probability score (nCRPS) and normalised Pinball Loss (nPL) on two data sets containing more than 1,000 time series in total. The results show that using statistical forecasting components improves the probabilistic forecast performance and that ProbPNN outperforms other deep learning forecasting methods while requiring lower computational costs.
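
The core idea of combining calendar-driven statistics with a neural forecaster can be sketched as follows. The slot definition (weekday, hour), the four-week rolling window, and the pinball-loss helper are illustrative assumptions, not ProbPNN's actual implementation.

```python
# Hedged sketch: calendar-driven rolling statistics as extra inputs for a
# probabilistic forecaster. Column names, window sizes, and the downstream
# model are assumptions for illustration only.
import numpy as np
import pandas as pd

# Synthetic hourly load series with a daily pattern plus noise.
idx = pd.date_range("2021-01-01", periods=24 * 7 * 8, freq="h")
load = 100 + 20 * np.sin(2 * np.pi * idx.hour / 24) \
       + np.random.default_rng(1).normal(0, 5, len(idx))
df = pd.DataFrame({"load": load}, index=idx)

# Calendar-driven prior knowledge: group by (weekday, hour) so each timestamp is
# compared with the same slot in previous weeks, then compute rolling statistics.
slot = df.groupby([df.index.dayofweek, df.index.hour])["load"]
df["slot_mean"] = slot.transform(lambda s: s.rolling(4, min_periods=1).mean().shift(1))
df["slot_std"] = slot.transform(lambda s: s.rolling(4, min_periods=1).std().shift(1))

# These statistics would be concatenated with the raw history and fed to a
# neural network that outputs forecast quantiles; evaluation can then use the
# pinball (quantile) loss, whose normalised form (nPL) is reported in the paper.
def pinball_loss(y_true, y_pred, q):
    """Average pinball loss for a single quantile level q."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))
```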

EasyMLServe: Easy Deployment of REST Machine Learning Services

Nov 26, 2022
Oliver Neumann, Marcel Schilling, Markus Reischl, Ralf Mikut

Various research domains use machine learning approaches because they can solve complex tasks by learning from data. Deploying machine learning models, however, is not trivial: developers have to implement complete solutions, which are often installed locally and include Graphical User Interfaces (GUIs). Distributing such software to many users on-site causes several problems. Therefore, we propose a concept to deploy the software in the cloud. Several frameworks based on Representational State Transfer (REST) can be used to implement cloud-based machine learning services. However, machine learning services for scientific users have special requirements that state-of-the-art REST frameworks do not cover completely. We contribute EasyMLServe, a software framework to deploy machine learning services in the cloud using REST interfaces and generic local or web-based GUIs. Furthermore, we apply our framework to two real-world applications, i.e., energy time-series forecasting and cell instance segmentation. The EasyMLServe framework and the use cases are available on GitHub.
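
As a rough illustration of the deployment pattern (not EasyMLServe's actual API), a REST machine learning service can be as small as the following FastAPI sketch; the endpoint name, request schema, and naive placeholder model are assumptions.

```python
# Minimal sketch of the general pattern: an ML model behind a REST endpoint.
# FastAPI is used as a stand-in framework with a dummy model; EasyMLServe's
# real interfaces are documented in its GitHub repository.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Demo forecasting service")

class ForecastRequest(BaseModel):
    history: list[float]   # recent load values sent by the client GUI
    horizon: int = 24      # number of steps to predict

class ForecastResponse(BaseModel):
    forecast: list[float]

@app.post("/predict", response_model=ForecastResponse)
def predict(req: ForecastRequest) -> ForecastResponse:
    # Placeholder model: repeat the last observed value (naive forecast).
    last = req.history[-1] if req.history else 0.0
    return ForecastResponse(forecast=[last] * req.horizon)

# Run with: uvicorn <module_name>:app --port 8000
```

A generic local or web-based GUI then only needs to POST JSON to such an endpoint, which is the separation between service and user interface that the framework builds on.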

* Schulte, H. Proceedings - 32. Workshop Computational Intelligence: Berlin, 1. - 2. Dezember 2022. KIT Scientific Publishing, 2022  

ciscNet -- A Single-Branch Cell Instance Segmentation and Classification Network

Feb 25, 2022
Moritz Böhland, Oliver Neumann, Marcel P. Schilling, Markus Reischl, Ralf Mikut, Katharina Löffler, Tim Scherr

Automated cell nucleus segmentation and classification are required to assist pathologists in their decision making. The Colon Nuclei Identification and Counting Challenge 2022 (CoNIC Challenge 2022) supports the development and comparability of segmentation and classification methods for histopathological images. In this contribution, we describe our CoNIC Challenge 2022 method ciscNet to segment, classify and count cell nuclei, and report preliminary evaluation results. Our code is available at https://git.scc.kit.edu/ciscnet/ciscnet-conic-2022.
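
A hedged sketch of the single-branch idea is given below: one shared trunk predicts per-pixel class labels, which are post-processed into instances and per-class counts. The layer sizes, the connected-component post-processing, and the class count are illustrative assumptions; the actual ciscNet architecture is in the linked repository.

```python
# Hedged sketch of a single-branch segmentation-and-classification network.
import torch
import torch.nn as nn
from scipy import ndimage

NUM_CLASSES = 7  # background + 6 nucleus types (CoNIC setting); assumption here

single_branch = nn.Sequential(              # one shared branch, no separate heads
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_CLASSES, 1),          # per-pixel class logits
)

def count_nuclei(logits: torch.Tensor) -> dict:
    """Turn per-pixel class logits into per-class instance counts."""
    labels = logits.argmax(dim=0).cpu().numpy()      # (H, W) class map
    counts = {}
    for cls in range(1, NUM_CLASSES):                # skip background (0)
        _, n = ndimage.label(labels == cls)          # connected components
        counts[cls] = n
    return counts

with torch.no_grad():
    logits = single_branch(torch.rand(1, 3, 256, 256))[0]   # toy RGB patch
print(count_nuclei(logits))
```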

* CoNIC Challenge 2022 submission 

Smart Data Representations: Impact on the Accuracy of Deep Neural Networks

Nov 17, 2021
Oliver Neumann, Nicole Ludwig, Marian Turowski, Benedikt Heidrich, Veit Hagenmeyer, Ralf Mikut

Deep Neural Networks can solve many complex tasks with less engineering effort and often better performance than traditional approaches. However, these networks often use data for training and evaluation without investigating its representation, i.e., the form of the data used. In the present paper, we analyze the impact of data representations on the performance of Deep Neural Networks using energy time series forecasting. Based on an overview of exemplary data representations, we select four of them and evaluate them using two different Deep Neural Network architectures and three forecasting horizons on real-world energy time series. The results show that, depending on the forecast horizon, the same data representation can have a positive or negative impact on the accuracy of Deep Neural Networks.
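
To illustrate what a data representation means here, the sketch below offers the same synthetic load series to a model in four different forms. The four representations shown (raw, standardised, differenced, calendar-encoded) are assumptions for illustration; the paper's own selection may differ.

```python
# Hedged sketch: the same load series in four alternative representations.
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=24 * 14, freq="h")
load = pd.Series(100 + 20 * np.sin(2 * np.pi * idx.hour / 24), index=idx)

representations = {
    "raw": load,
    "standardised": (load - load.mean()) / load.std(),
    "differenced": load.diff().fillna(0.0),
    "calendar_encoded": pd.DataFrame({
        "load": load,
        "hour_sin": np.sin(2 * np.pi * idx.hour / 24),
        "hour_cos": np.cos(2 * np.pi * idx.hour / 24),
    }, index=idx),
}
# Each entry would be windowed identically and fed to the same network
# architecture, so that only the representation, not the model, changes.
```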

* Schulte, H. Proceedings - 31. Workshop Computational Intelligence: Berlin, 25. - 26. November 2021. KIT Scientific Publishing, 2021

pyWATTS: Python Workflow Automation Tool for Time Series

Jun 18, 2021
Benedikt Heidrich, Andreas Bartschat, Marian Turowski, Oliver Neumann, Kaleb Phipps, Stefan Meisenbacher, Kai Schmieder, Nicole Ludwig, Ralf Mikut, Veit Hagenmeyer

Time series data are fundamental for a variety of applications, ranging from financial markets to energy systems. Due to their importance, the number and complexity of tools and methods used for time series analysis is constantly increasing. However, due to unclear APIs and a lack of documentation, researchers struggle to integrate them into their research projects and replicate results. Additionally, time series analysis involves many repetitive tasks, which are often re-implemented for each project and unnecessarily cost time. To solve these problems, we present pyWATTS, an open-source Python-based package that is a non-sequential workflow automation tool for the analysis of time series data. pyWATTS includes modules with clearly defined interfaces to enable seamless integration of new or existing methods, subpipelining to easily reproduce repetitive tasks, load and save functionality to easily replicate results, and native support for key Python machine learning libraries such as scikit-learn, PyTorch, and Keras.
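
The following sketch illustrates the non-sequential (graph-shaped) workflow idea in plain Python. It deliberately does not use pyWATTS's actual API; the two preprocessing branches and the merge step are assumptions chosen only to show why a pipeline may branch and re-join instead of forming a single chain.

```python
# Hedged sketch of a non-sequential preprocessing workflow (not pyWATTS's API).
import numpy as np
from sklearn.preprocessing import StandardScaler

def calendar_features(index_hours):
    """Derive calendar inputs from the time index (one branch of the graph)."""
    return np.stack([np.sin(2 * np.pi * index_hours / 24),
                     np.cos(2 * np.pi * index_hours / 24)], axis=1)

def scale_load(load):
    """Scale the raw load (a second, independent branch)."""
    return StandardScaler().fit_transform(load.reshape(-1, 1))

# Non-sequential structure: two independent preprocessing branches whose
# outputs are merged before the forecasting step, rather than a single chain.
hours = np.arange(24 * 7)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24)

branch_a = calendar_features(hours)             # calendar branch
branch_b = scale_load(load)                     # scaling branch
model_input = np.hstack([branch_a, branch_b])   # merge step feeding the model
```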
