
Maria Tzelepi


Deep Learning for Energy Time-Series Analysis and Forecasting

Jun 29, 2023
Maria Tzelepi, Charalampos Symeonidis, Paraskevi Nousi, Efstratios Kakaletsis, Theodoros Manousis, Pavlos Tosidis, Nikos Nikolaidis, Anastasios Tefas

Figures 1–4 for Deep Learning for Energy Time-Series Analysis and Forecasting

Energy time-series analysis describes the process of analyzing past energy observations, and possibly external factors, in order to predict future values. The general field of energy time-series analysis and forecasting comprises several tasks, with electric load demand forecasting, personalized energy consumption forecasting, and renewable energy generation forecasting among the most common. Following the exceptional performance of Deep Learning (DL) in a broad range of vision tasks, DL models have been successfully applied to time-series forecasting tasks. This paper aims to provide insight into various DL methods geared towards improving performance in energy time-series forecasting tasks, with special emphasis on the Greek energy market, and to equip the reader with the knowledge needed to apply these methods in practice.

* 13 pages, 4 figures 
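A minimal sketch of the kind of baseline such load-forecasting work is compared against: a least-squares autoregressive model fitted on synthetic hourly-load data with a 24-step daily cycle. This is purely illustrative; the lag count, synthetic series, and function names are assumptions, not the paper's models.

```python
import numpy as np

def make_lagged(series, n_lags):
    """Build a (samples, n_lags) design matrix and next-step targets."""
    X = np.stack([series[i:len(series) - n_lags + i] for i in range(n_lags)], axis=1)
    y = series[n_lags:]
    return X, y

def fit_ar(series, n_lags=24):
    """Least-squares AR model: y_t = w . [y_{t-24}, ..., y_{t-1}] + b."""
    X, y = make_lagged(series, n_lags)
    A = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def forecast(series, coef, n_lags=24):
    """One-step-ahead forecast from the last n_lags observations."""
    window = np.append(series[-n_lags:], 1.0)  # 1.0 multiplies the bias term
    return float(window @ coef)

# Synthetic "hourly load": daily sinusoid around 100 units plus noise.
rng = np.random.default_rng(0)
t = np.arange(24 * 30)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

coef = fit_ar(load, n_lags=24)
pred = forecast(load, coef, n_lags=24)
```

A DL forecaster would replace the linear map with a neural network, but the lagged-window framing of the data stays the same.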

Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation

Aug 26, 2021
Maria Tzelepi, Anastasios Tefas

Figures 1–2 for Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation

Knowledge Distillation (KD) has been established as a highly promising approach for training compact and fast models by transferring knowledge from heavyweight, powerful models. However, KD in its conventional form is an enduring, computationally and memory demanding process. In this paper, Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner. We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space. This allows us to directly estimate the posterior class probabilities of the data samples, which we use as soft labels that encode explicit information about the similarities of the data with the classes, while negligibly affecting the computational cost. The experimental evaluation on four datasets validates the effectiveness of the proposed method.

* Accepted at ICME 2021 
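The core idea of the abstract above can be sketched as follows: estimate each sample's posterior class probabilities from the class membership of its k nearest neighbours in feature space, and use those as soft labels. This is a generic leave-one-out k-NN estimate, not the exact OSAKD formulation; the smoothing constant and toy data are assumptions.

```python
import numpy as np

def knn_soft_labels(features, labels, n_classes, k=5, eps=1e-12):
    """Posterior class probabilities P(c | x) per sample via leave-one-out
    k-NN density estimation in the feature space."""
    n = len(features)
    # Pairwise squared Euclidean distances; mask self-distances.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]  # indices of the k nearest neighbours
    soft = np.zeros((n, n_classes))
    for i in range(n):
        # Posterior ~ fraction of the k neighbours belonging to each class,
        # lightly smoothed so no class gets exactly zero probability.
        counts = np.bincount(labels[nn[i]], minlength=n_classes)
        soft[i] = (counts + eps) / (counts.sum() + n_classes * eps)
    return soft

# Two well-separated toy clusters: soft labels should agree with hard labels.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
soft = knn_soft_labels(feats, labels, n_classes=2, k=5)
```

In an online distillation setting these soft labels would be mixed into the training loss alongside the hard labels.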

Quadratic mutual information regularization in real-time deep CNN models

Aug 26, 2021
Maria Tzelepi, Anastasios Tefas

Figures 1–4 for Quadratic mutual information regularization in real-time deep CNN models

In this paper, regularized lightweight deep convolutional neural network models are proposed, capable of effectively operating in real-time on devices with restricted computational power for high-resolution video input. Furthermore, a novel regularization method motivated by Quadratic Mutual Information is proposed in order to improve the generalization ability of the utilized models. Extensive experiments on various binary classification problems arising in autonomous systems indicate the effectiveness of both the proposed models and the proposed regularizer.

* Accepted at MLSP 2020 
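For intuition, the quadratic mutual information between features and discrete labels can be estimated with Parzen windows via the classic information potentials (a Torkkola-style estimate, sketched here in numpy; this is background for the regularizer's motivation, not the paper's exact formulation, and the kernel bandwidth is an assumption):

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Pairwise Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2 sigma^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def quadratic_mutual_information(X, y, sigma=1.0):
    """QMI between features X and discrete labels y via information potentials:
    QMI = V_in + V_all - 2 * V_btw."""
    N = len(X)
    K = gaussian_kernel(X, sigma)
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    v_in = sum(K[np.ix_(y == c, y == c)].sum() for c in classes) / N**2
    v_all = (priors ** 2).sum() * K.sum() / N**2
    v_btw = sum(p * K[y == c].sum() for c, p in zip(classes, priors)) / N**2
    return v_in + v_all - 2 * v_btw

# QMI is high when features and labels align, near zero when labels are shuffled.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(4, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
qmi_sep = quadratic_mutual_information(X, y)
qmi_shuf = quadratic_mutual_information(X, rng.permutation(y))
```

Used as a regularizer, a term like this encourages the network's feature space to stay informative about the class labels.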

Semantic Scene Segmentation for Robotics Applications

Aug 25, 2021
Maria Tzelepi, Anastasios Tefas

Figures 1–4 for Semantic Scene Segmentation for Robotics Applications

Semantic scene segmentation plays a critical role in a wide range of robotics applications, e.g., autonomous navigation. These applications come with specific computational restrictions, e.g., operation on low-power GPUs, at sufficient speed, and for high-resolution input. Existing state-of-the-art segmentation models report evaluation results under different setups, mainly considering high-power GPUs. In this paper, we investigate the behavior of the most successful semantic scene segmentation models in terms of deployment (inference) speed, under various setups (GPUs, input sizes, etc.) in the context of robotics applications. The goal of this work is to provide a comparative study of current state-of-the-art segmentation models so as to select the one most compliant with the requirements of robotics applications.

* Accepted at IISA 2021 
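A deployment-speed comparison of the kind described above boils down to a timing harness like the following sketch: warm-up runs, repeated timed forward passes, and latency/FPS per input size. The stand-in "model" (a 3x3 mean filter) and the resolutions are illustrative only; in the study the `model_fn` would be an actual segmentation network.

```python
import time
import numpy as np

def benchmark(model_fn, input_shape, n_warmup=3, n_runs=10):
    """Mean latency and FPS of model_fn on random input of input_shape."""
    x = np.random.rand(*input_shape).astype(np.float32)
    for _ in range(n_warmup):          # warm-up runs, excluded from timing
        model_fn(x)
    start = time.perf_counter()
    for _ in range(n_runs):
        model_fn(x)
    latency = (time.perf_counter() - start) / n_runs
    return {"latency_s": latency, "fps": 1.0 / latency}

# Stand-in "model": a single 3x3 mean filter, just to keep the sketch runnable.
def toy_model(x):
    return (x[:-2, :-2] + x[1:-1, 1:-1] + x[2:, 2:]) / 3.0

results = {}
for size in [(240, 320), (480, 640)]:  # compare input resolutions
    results[size] = benchmark(toy_model, size)
```

On a GPU one would additionally synchronize the device before reading the clock, since kernel launches are asynchronous.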

Heterogeneous Knowledge Distillation using Information Flow Modeling

May 02, 2020
Nikolaos Passalis, Maria Tzelepi, Anastasios Tefas

Figures 1–4 for Heterogeneous Knowledge Distillation using Information Flow Modeling

Knowledge Distillation (KD) methods are capable of transferring the knowledge encoded in a large and complex teacher into a smaller and faster student. Early methods were usually limited to transferring knowledge only between the last layers of the networks, while later approaches perform multi-layer KD, further increasing the accuracy of the student. However, despite their improved performance, these methods still suffer from several limitations that restrict both their efficiency and flexibility. First, existing KD methods typically ignore that neural networks undergo different learning phases during the training process, each of which often requires a different type of supervision. Furthermore, existing multi-layer KD methods are usually unable to effectively handle networks with significantly different architectures (heterogeneous KD). In this paper we propose a novel KD method that works by modeling the information flow through the various layers of the teacher model and then training a student model to mimic this information flow. The proposed method overcomes the aforementioned limitations by using an appropriate supervision scheme during the different phases of the training process, as well as by designing and training an appropriate auxiliary teacher model that acts as a proxy capable of "explaining" to the student the way the teacher works. The effectiveness of the proposed method is demonstrated using four image datasets and several different evaluation setups.

* Accepted at CVPR 2020 
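One common way to make multi-layer KD architecture-agnostic, related in spirit to the abstract above, is to match batch-similarity structure rather than raw activations: since only an NxN similarity matrix per layer is compared, teacher and student layer widths may differ. This is a generic similarity-matching sketch under that assumption, not the paper's information flow modeling method.

```python
import numpy as np

def similarity_matrix(feats):
    """Cosine-similarity matrix of a batch of layer activations (N, width)."""
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    return norm @ norm.T

def multilayer_kd_loss(teacher_layers, student_layers):
    """Mean squared difference between teacher and student batch-similarity
    matrices, summed over matched layer pairs."""
    loss = 0.0
    for t, s in zip(teacher_layers, student_layers):
        loss += ((similarity_matrix(t) - similarity_matrix(s)) ** 2).mean()
    return loss

# Heterogeneous widths: teacher layers are 64/128-d, student layers 16/32-d.
rng = np.random.default_rng(3)
teacher = [rng.normal(size=(8, 64)), rng.normal(size=(8, 128))]
student_rand = [rng.normal(size=(8, 16)), rng.normal(size=(8, 32))]
loss_same = multilayer_kd_loss(teacher, teacher)      # identical activations
loss_rand = multilayer_kd_loss(teacher, student_rand)  # unrelated activations
```

The loss vanishes when the student reproduces the teacher's similarity structure and grows as the structures diverge, which is the property a multi-layer distillation term needs.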