Hypertension is commonly referred to as the "silent killer," since it can lead to severe health complications without visible symptoms. Early detection of hypertension is therefore crucial for preventing serious health issues. Although some studies suggest a relationship between blood pressure and certain vital signals, such as the photoplethysmogram (PPG), reliable generalization of the proposed blood pressure estimation methods has not yet been demonstrated. This uncertainty has led some studies to doubt the existence of such a relationship, or to consider it weak and limited to a link between heart rate and blood pressure. In this paper, a high-dimensional representation technique based on random convolution kernels is proposed for hypertension detection using PPG signals. The results show that the relationship extends beyond heart rate, demonstrating the feasibility of hypertension detection that generalizes. In addition, the employed transform based on random convolution kernels, used as an end-to-end time-series feature extractor, outperforms both the methods proposed in previous studies and state-of-the-art deep learning models.
Reconfigurable intelligent surfaces (RISs) are expected to be a main component of future 6G networks due to their capability to create a controllable wireless environment and to achieve extended coverage and improved localization accuracy. In this paper, we present a novel cooperative positioning use case of the RIS at mmWave frequencies and show that, in the presence of an RIS together with sidelink communications, localization with zero access points (APs) is possible. We show that multiple (at least three) half-duplex single-antenna user equipments (UEs) can cooperatively estimate their positions through device-to-device communications with a single RIS as an anchor, without the need for any APs. We start by formulating a three-dimensional positioning problem and deriving the Cram\'er-Rao lower bound (CRLB) for performance analysis. After that, we discuss the RIS profile design and the power allocation strategy between the UEs. Then, we propose low-complexity estimators for the channel parameters and the UEs' positions. Finally, we evaluate the performance of the proposed estimators and RIS profiles in the considered scenario via extensive simulations and show that sub-meter positioning accuracy can be achieved under multi-path propagation.
A smart city involves, among other elements, intelligent transportation, crowd monitoring, and digital twins, each of which requires information exchange via wireless communication links and localization of connected devices and passive objects (including people). Although localization and sensing (L&S) are envisioned as core functions of future communication systems, they have inherently different infrastructure demands compared to communications. Wireless communication generally requires a connection to only a single access point (AP), while L&S demand simultaneous line-of-sight propagation paths to several APs, which serve as location and orientation anchors. Hence, a smart city deployment optimized for communication will be insufficient to meet stringent L&S requirements. In this article, we argue that the emerging technologies of reconfigurable intelligent surfaces (RISs) and sidelink communications constitute the key to providing ubiquitous coverage for L&S in smart cities with low-cost and energy-efficient technical solutions. To this end, we propose and evaluate AP-coordinated and self-coordinated RIS-enabled L&S architectures and detail three groups of application scenarios, relying on low-complexity beacons, cooperative localization, and full-duplex transceivers. A list of practical issues and consequent open research challenges of the proposed L&S systems is also provided.
The random convolution kernel transform (Rocket) is a fast and efficient approach to time series feature extraction that applies a large number of randomly initialized, untrained convolution kernels and classifies the resulting features with a linear classifier. Since the kernels are generated randomly, a portion of them may not contribute positively to the model's performance. Hence, selecting the most important kernels and pruning the redundant, less important ones is necessary to reduce computational complexity and accelerate Rocket's inference. This kernel selection is a combinatorial optimization problem. In this paper, the kernel selection process is modeled as an optimization problem and a population-based approach is proposed for selecting the most important kernels. The approach is evaluated on standard time series datasets, and the results show that, on average, it achieves performance similar to the original models while pruning more than 60% of the kernels. In some cases, it achieves similar performance using only 1% of the kernels.
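The core idea, representing a time series through many random, untrained kernels, can be sketched minimally in numpy. This is a simplified, hypothetical illustration (no dilation or padding, toy kernel lengths), not the authors' implementation; it records the two classic Rocket features per kernel, the maximum and the proportion of positive values (PPV):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_kernels(num_kernels):
    # Rocket-style kernels: random lengths, zero-mean random weights, and a
    # random bias.  Dilation and padding are omitted for brevity.
    kernels = []
    for _ in range(num_kernels):
        length = rng.choice([7, 9, 11])
        weights = rng.normal(size=length)
        weights -= weights.mean()          # zero-mean weights, as in Rocket
        bias = rng.uniform(-1, 1)
        kernels.append((weights, bias))
    return kernels

def transform(x, kernels):
    # For each kernel, record two features of the convolution output:
    # the maximum value and the proportion of positive values (PPV).
    feats = []
    for weights, bias in kernels:
        conv = np.convolve(x, weights, mode="valid") + bias
        feats.extend([conv.max(), (conv > 0).mean()])
    return np.array(feats)

x = np.sin(np.linspace(0, 8 * np.pi, 200))   # toy time series
features = transform(x, random_kernels(100))
print(features.shape)                         # 2 features per kernel
```

The resulting feature vector is then fed to a linear classifier; since the kernels are never trained, the transform itself requires no backpropagation.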
This paper proposes a cooperative angle-of-arrival (AoA) estimation method that co-processes channel state information (CSI) from a group of access points receiving signals from the same source. Since the received signals are sparse, we use compressive sensing (CS) to address the AoA estimation problem. We formulate the problem as a penalized $\ell_0$-norm minimization, reformulate it as an Ising energy problem, and solve it using Markov chain Monte Carlo (MCMC). Simulation results show that the proposed method outperforms existing methods in the literature.
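The penalized $\ell_0$ / Ising / MCMC pipeline described above can be illustrated on a toy sparse-recovery problem. This is a hedged sketch, not the paper's method: the measurement matrix, penalty weight, and annealing schedule are all invented for illustration, and a Metropolis sampler over binary support vectors stands in for the Ising-energy MCMC:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: recover a sparse support from noisy linear measurements.
n, m, k = 20, 40, 3
A = rng.normal(size=(m, n))
true_support = rng.choice(n, size=k, replace=False)
x = np.zeros(n); x[true_support] = 1.0
y = A @ x + 0.01 * rng.normal(size=m)

lam = 0.5  # weight of the l0 (sparsity) penalty

def energy(s):
    # Penalized l0 objective: least-squares residual on the active columns
    # plus a sparsity penalty, playing the role of an Ising energy.
    idx = np.flatnonzero(s)
    if idx.size == 0:
        return y @ y
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    r = y - A[:, idx] @ coef
    return r @ r + lam * idx.size

# Metropolis MCMC over binary support vectors with single-bit flips and a
# slowly decreasing temperature (simulated annealing).
s = np.zeros(n, dtype=int)
e = energy(s)
T = 1.0
for _ in range(3000):
    j = rng.integers(n)
    s2 = s.copy(); s2[j] ^= 1
    e2 = energy(s2)
    if e2 < e or rng.random() < np.exp((e - e2) / T):
        s, e = s2, e2
    T = max(0.05, T * 0.999)

print(sorted(np.flatnonzero(s)))
```

With a deep energy minimum at the true support, the sampler typically settles on it; the point of the Ising reformulation is that such binary energy minimizations admit fast stochastic solvers.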
A typical deep neural network (DNN) has a large number of trainable parameters. Choosing a network with the proper capacity is challenging, and in practice a larger network with excess capacity is trained. Pruning is an established approach to reducing the number of parameters in a DNN. In this paper, we propose a framework for pruning DNNs based on a population-based global optimization method. The framework can use any pruning objective function. As a case study, we propose a simple but efficient objective function based on the concept of energy-based models. Our experiments on ResNets, AlexNet, and SqueezeNet with the CIFAR-10 and CIFAR-100 datasets show a pruning rate of more than $50\%$ of the trainable parameters with drops of approximately $5\%$ or less in Top-1 and $1\%$ or less in Top-5 classification accuracy, respectively.
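The idea of searching for a pruning mask with a population-based optimizer can be sketched on a toy model. This is an illustrative sketch under invented assumptions (a linear model instead of a DNN, a least-squares fit term plus a sparsity penalty as the pluggable objective, a basic keep-best-half evolutionary loop), not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "network": a dense linear model in which only 3 of 10 weights matter.
X = rng.normal(size=(200, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=200)
w = np.linalg.lstsq(X, y, rcond=None)[0]      # trained dense weights

def energy(mask):
    # Pluggable pruning objective: data-fit term plus a sparsity penalty.
    r = y - X @ (w * mask)
    return r @ r / len(y) + 0.05 * mask.sum()

pop = rng.random((16, 10)) < 0.5              # population of binary masks
for _ in range(50):
    order = np.argsort([energy(m) for m in pop])
    parents = pop[order[:8]]                  # keep the best half
    children = parents.copy()
    children ^= rng.random(children.shape) < 0.1   # mutate ~10% of bits
    pop = np.vstack([parents, children])

best = pop[np.argmin([energy(m) for m in pop])]
print(best.astype(int))                        # mask over the 10 weights
```

The search keeps the three informative weights (removing any of them raises the fit term far more than the penalty saved) and prunes the rest; in the paper the same loop operates on DNN parameters with an energy-based objective.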
Pruning is one of the major methods for compressing deep neural networks. In this paper, we propose an Ising energy model within an optimization framework for pruning convolutional kernels and hidden units. The model is designed to reduce redundancy between weight kernels and to detect inactive kernels and hidden units. Our experiments using ResNets, AlexNet, and SqueezeNet on the CIFAR-10 and CIFAR-100 datasets show that, on average, the proposed method achieves a pruning rate of more than $50\%$ of the trainable parameters with drops of approximately $10\%$ or less in Top-1 and $5\%$ or less in Top-5 classification accuracy, respectively.
Neural network pruning is an important technique for creating efficient machine learning models that can run on edge devices. We propose a new, highly flexible approach to neural network pruning based on Gibbs distributions. We apply it with Hamiltonians that are based on weight magnitude, using the annealing capabilities of Gibbs distributions to smoothly move from regularization to adaptive pruning during an ordinary neural network training schedule. This method can be used for either unstructured or structured pruning, and we provide explicit formulations for both. We compare our proposed method to several established pruning methods on ResNet variants and find that it outperforms them for unstructured, kernel-wise, and filter-wise pruning.
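The annealing behaviour described above, moving smoothly from mild regularization to sharp magnitude-based pruning, can be sketched by sampling a keep-mask from a Gibbs distribution. This is a minimal, hypothetical illustration (toy Hamiltonian $H(w) = -|w|$, invented temperature schedule), not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for one layer's weights.
w = rng.normal(size=1000)
target_keep = 0.5            # prune half the weights

def gibbs_mask(w, beta, keep_frac, rng):
    # Keep probabilities follow a Gibbs distribution over the magnitude
    # Hamiltonian H(w) = -|w|:  p_i ∝ exp(-beta * H(w_i)) = exp(beta * |w_i|).
    logits = beta * np.abs(w)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    keep = rng.choice(w.size, size=int(keep_frac * w.size),
                      replace=False, p=p)
    mask = np.zeros(w.size, dtype=bool)
    mask[keep] = True
    return mask

# Annealing the inverse temperature beta: at beta = 0 the mask is uniform
# (a dropout-like regularizer); as beta grows, it concentrates on
# large-magnitude weights (adaptive magnitude pruning).
for beta in [0.0, 1.0, 10.0]:
    mask = gibbs_mask(w, beta, target_keep, rng)
    print(beta, np.abs(w[mask]).mean())
```

The mean magnitude of the kept weights grows with beta, which is the mechanism that lets a single training schedule interpolate between regularization and pruning; structured variants apply the same distribution per kernel or per filter instead of per weight.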
Dropout is well known as an effective regularization method that samples a sub-network from a larger deep neural network and trains different sub-networks on different subsets of the data. Inspired by the concept of dropout, we stochastically select, train, and evolve a population of sub-networks, where each sub-network is represented by a state vector and a scalar energy. The proposed energy-based dropout (EDropout) method provides a unified framework that can be applied to any neural network without the need for proper normalization. The concept of energy in EDropout can handle a diverse set of constraints without any limit on the size or length of the state vectors. During training, the selected set of sub-networks converges to a sub-network that minimizes the energy of the candidate state vectors. The rest of the training time is then allocated to fine-tuning the selected sub-network; this process is equivalent to pruning. We evaluate the proposed method on different flavours of ResNets, AlexNet, and SqueezeNet on the Kuzushiji, Fashion, CIFAR-10, CIFAR-100, and Flowers datasets, and compare it with state-of-the-art pruning and compression methods. We show that, on average, networks trained with EDropout achieve a pruning rate of more than 50% of the trainable parameters with drops of approximately 5% or less in Top-1 and 1% or less in Top-5 classification accuracy, respectively.
Dropout and similar stochastic neural network regularization methods are often interpreted as implicitly averaging over a large ensemble of models. We propose STE (stochastically trained ensemble) layers, which enhance the averaging properties of such methods by training an ensemble of weight matrices with stochastic regularization while explicitly averaging outputs. This provides stronger regularization with no additional computational cost at test time. We show consistent improvement on various image classification tasks using standard network topologies.
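The explicit-averaging idea can be sketched for a single linear layer. This is a hedged toy version under invented assumptions (the class name `STELinear`, ensemble size, and dropout rate are illustrative, not from the paper): each ensemble member sees its own dropout mask during training while the outputs are averaged, and at test time the members collapse into one averaged weight matrix, so inference costs the same as a plain layer:

```python
import numpy as np

class STELinear:
    """Toy STE-style layer: an ensemble of weight matrices whose outputs are
    explicitly averaged.  Each member gets its own dropout mask during
    training; at test time the members are averaged into a single matrix."""

    def __init__(self, d_in, d_out, ensemble=4, p_drop=0.5, rng=None):
        self.rng = rng or np.random.default_rng()
        self.W = self.rng.normal(scale=0.1, size=(ensemble, d_in, d_out))
        self.p = p_drop

    def forward_train(self, x):
        # Stochastic regularization per member, explicit averaging of outputs.
        outs = []
        for W in self.W:
            mask = self.rng.random(x.shape) > self.p
            outs.append((x * mask / (1 - self.p)) @ W)  # inverted dropout
        return np.mean(outs, axis=0)

    def forward_test(self, x):
        # Averaging weights first: no extra cost relative to a plain layer.
        return x @ self.W.mean(axis=0)

layer = STELinear(5, 3, rng=np.random.default_rng(0))
x = np.ones((2, 5))
print(layer.forward_test(x).shape)
```

Since the test-time forward pass uses only the mean weight matrix, the ensemble adds training-time regularization without any inference overhead, which is the property the abstract highlights.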