Motion sensors integrated into wearable and mobile devices provide valuable information about their users. Machine learning and, more recently, deep learning techniques have been used to characterize sensor data. In most cases, a single task, such as activity recognition, is targeted, and the data is processed centrally at a server or in a cloud environment. However, the same sensor data can serve multiple tasks, and distributed machine learning techniques can be used without requiring the transmission of data to a central location. This paper explores Federated Transfer Learning in a multi-task manner for two sensor-based tasks: human activity recognition and device position identification. The models are trained on the OpenHAR framework, which combines ten smaller datasets. The aim is to obtain models applicable to both tasks across different datasets, each of which may include only a subset of label types. Multiple experiments are carried out in the Flower federated learning environment using the DeepConvLSTM architecture. Results are presented for federated and centralized versions under different parameters and restrictions. By utilizing transfer learning and training a task-specific, personalized federated model, we obtain accuracy similar to training each client individually and higher accuracy than a fully centralized approach.
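The core idea of a task-specific, personalized federated model can be illustrated with a minimal sketch: backbone parameters are averaged across clients in a FedAvg-style round, while task-specific head parameters stay local. All names here (`federated_round`, the `"conv"`/`"head"` keys) are hypothetical and not part of the paper's actual implementation, which uses Flower and DeepConvLSTM.

```python
import numpy as np

def federated_round(client_models, shared_keys, client_sizes):
    """One FedAvg-style round restricted to shared (backbone) parameters.

    client_models: list of dicts mapping layer name -> np.ndarray weights.
    shared_keys:   parameter names aggregated across clients; all other
                   keys (e.g. task-specific heads) stay local/personalized.
    client_sizes:  number of training samples per client (FedAvg weights).
    """
    total = sum(client_sizes)
    # Weighted average of each shared parameter across clients.
    averaged = {
        k: sum(m[k] * (n / total) for m, n in zip(client_models, client_sizes))
        for k in shared_keys
    }
    # Broadcast the averaged backbone back; personalized heads are untouched.
    for m in client_models:
        for k in shared_keys:
            m[k] = averaged[k].copy()
    return client_models

# Two clients: shared "conv" backbone, personalized "head" per task.
clients = [
    {"conv": np.array([1.0, 1.0]), "head": np.array([0.5])},
    {"conv": np.array([3.0, 3.0]), "head": np.array([0.9])},
]
clients = federated_round(clients, shared_keys=["conv"], client_sizes=[100, 300])
print(clients[0]["conv"])  # weighted mean: 0.25*1 + 0.75*3 -> [2.5 2.5]
```

After the round, both clients hold the same backbone but keep their own heads, which is what allows each client to specialize for its available label types.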
The rapid proliferation of edge sensing devices, such as IoT, mobile, and wearable devices with integrated sensors, has made it possible to monitor ambient characteristics, interact with them, and derive information about the surroundings. Even though these devices are small and have limited capacity for data storage and processing, they produce vast amounts of data. Example application areas where sensor data is collected and processed include healthcare, environmental monitoring (including air quality and pollution levels), automotive, industrial, aerospace, and agricultural applications. The enormous volumes of sensing data collected from edge devices are analyzed using a variety of Machine Learning (ML) and Deep Learning (DL) approaches. However, analyzing them on the cloud or a server presents challenges related to privacy, hardware, and connectivity limitations. Federated Learning (FL) is emerging as a solution to these problems: it preserves privacy by jointly training a model without sharing raw data. In this paper, we review FL strategies from the perspective of edge sensing devices to overcome the limitations of conventional machine learning techniques. We focus on key FL principles, software frameworks, and testbeds. We also explore current sensor technologies, the properties of sensing devices, and sensing applications where FL is utilized. We conclude with a discussion of open issues and future research directions for FL.
Sleep is among the most important factors affecting one's daily performance, well-being, and quality of life. With wearable devices, it has become possible to measure sleep in daily life in an unobtrusive manner. Rather than relying on camera recordings and extracting sleep state from images, wrist-worn devices can measure it directly via accelerometer, heart rate, and heart rate variability sensors. Measured features include time to bed, time out of bed, bedtime duration, minutes to fall asleep, and minutes after wake-up. Several studies in the literature address sleep quality and sleep stage prediction; however, they use only wearable data for prediction or focus solely on sleep stages. In this study, we use the NetHealth dataset, collected from 698 college students via wearables as well as surveys. Recently, deep learning algorithms have advanced rapidly and generally outperform conventional machine learning techniques. Among them, Convolutional Neural Networks (CNNs) achieve high performance. Thus, in this study, we apply different CNN architectures that have already performed well in the human activity recognition domain and compare their results. We also apply Random Forest (RF), since it performs best among the conventional methods. In future studies, we will compare these with other deep learning algorithms.
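A common preprocessing step when feeding wearable sensor streams to CNN architectures of this kind is sliding-window segmentation. The sketch below illustrates the idea; the window length, step, and sampling rate are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (T, channels) sensor stream into overlapping windows,
    the usual input representation for CNN models on wearable data."""
    starts = range(0, signal.shape[0] - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

# Example: 10 s of 3-axis accelerometer data at an assumed 50 Hz,
# segmented into 2 s windows (100 samples) with 50% overlap.
acc = np.random.randn(500, 3)
windows = sliding_windows(acc, win_len=100, step=50)
print(windows.shape)  # (9, 100, 3)
```

Each window then becomes one training example, labeled with the sleep state covering that interval.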
With the development of wearable technologies, a new kind of healthcare data has become valuable as medical information. These data provide meaningful information about an individual's physiological and psychological states, such as activity level, mood, stress, and cognitive health. Such biomarkers are called digital biomarkers because they are collected from digital devices with integrated sensors. In this study, we explore digital biomarkers related to stress by examining data collected from mobile phones and smartwatches. We apply machine learning techniques, specifically Random Forest, to the Tesserae dataset to extract stress biomarkers. Using feature selection techniques on weather, activity, heart rate (HR), stress, sleep, and location (work-home) measurements from wearables, we determine the most important stress-related biomarkers. We believe we contribute to the interpretation of stress biomarkers with a wide range of features from different devices. In addition, we classify the $5$ different stress levels using the most important features, and our results show that we can achieve $85\%$ overall class accuracy by adjusting for class imbalance and adding extra features related to personality characteristics. We achieve similar and even better results in recognizing stress states with digital biomarkers in a daily-life scenario, targeting a higher number of classes compared to related studies.
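One standard way to adjust for class imbalance, as mentioned above, is random oversampling of minority classes before training the classifier. The sketch below shows the idea on toy data; the function name and class counts are illustrative assumptions, and the study's actual balancing method may differ.

```python
import numpy as np

def oversample_balance(X, y, rng=np.random.default_rng(0)):
    """Balance classes by resampling minority classes with replacement,
    a simple way to adjust for class imbalance before training."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=target, replace=True)
        for c in classes
    ])
    return X[idx], y[idx]

# Imbalanced toy set: 50 low-stress vs 10 high-stress samples.
X = np.random.randn(60, 4)
y = np.array([0] * 50 + [1] * 10)
Xb, yb = oversample_balance(X, y)
print(np.bincount(yb))  # [50 50]
```

After balancing, every class contributes equally to training, which prevents the classifier from simply favoring the majority stress level.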