Deep Learning (DL) methods have dramatically increased in popularity in recent years, with significant growth in their application to supervised learning problems in the biomedical sciences. However, the greater prevalence and complexity of missing data in modern biomedical datasets present significant challenges for DL methods. Here, we provide a formal treatment of missing data in the context of deeply learned generalized linear models, a supervised DL architecture for regression and classification problems. We propose a new architecture, \textit{dlglm}, that is one of the first to be able to flexibly account for both ignorable and non-ignorable patterns of missingness in input features and response at training time. We demonstrate through statistical simulation that our method outperforms existing approaches for supervised learning tasks in the presence of missing not at random (MNAR) missingness. We conclude with a case study of a Bank Marketing dataset from the UCI Machine Learning Repository, in which we predict whether clients subscribed to a product based on phone survey data.
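The MNAR setting the abstract refers to — where the probability that a value is missing depends on the (possibly unobserved) value itself — can be illustrated with a small self-masking simulation. This is a generic numpy sketch, not the paper's actual simulation design; the logistic masking model and its parameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully observed feature matrix: n samples, p features.
X = rng.normal(size=(1000, 3))

# Self-masking MNAR: the chance that an entry is missing grows with
# the entry's own value via a logistic model (illustrative parameters).
a, b = 1.5, 0.0
p_miss = 1.0 / (1.0 + np.exp(-(a * X + b)))
mask = rng.random(X.shape) < p_miss   # True = missing

X_obs = X.copy()
X_obs[mask] = np.nan

# Larger values are masked more often, so the observed mean is biased
# downward relative to the complete-data mean -- the bias that methods
# ignoring the missingness mechanism inherit.
print(X.mean(), np.nanmean(X_obs))
```

Under MCAR or MAR this bias can be corrected without modeling the mask; under MNAR, as here, the missingness mechanism itself must enter the model, which is what dlglm is designed to do.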
Modeling continuous dynamical systems from discretely sampled observations is a fundamental problem in data science. Often, such dynamics are the result of non-local processes whose influence accumulates as an integral over time. As such, these systems are modeled with Integro-Differential Equations (IDEs): generalizations of differential equations that comprise both an integral and a differential component. For example, brain dynamics are not accurately modeled by differential equations, since their behavior is non-Markovian, i.e., the dynamics are partly dictated by history. Here, we introduce the Neural IDE (NIDE), a framework that models the ordinary and integral components of IDEs using neural networks. We test NIDE on several toy and brain activity datasets and demonstrate that NIDE outperforms other models, including Neural ODE. These tasks include time extrapolation as well as predicting dynamics from unseen initial conditions, which we test on whole-cortex activity recordings in freely behaving mice. Further, we show that NIDE can decompose dynamics into their Markovian and non-Markovian constituents via the learned integral operator, which we test on fMRI brain activity recordings of people on ketamine. Finally, the integrand of the integral operator provides a latent space that gives insight into the underlying dynamics, which we demonstrate on wide-field brain imaging recordings. Altogether, NIDE is a novel approach that enables modeling of complex non-local dynamics with neural networks.
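The forward problem NIDE addresses can be made concrete with a scalar IDE, $y'(t) = f(y(t)) + \int_0^t k(t-s)\,y(s)\,ds$. The sketch below integrates such an equation with Euler steps and trapezoidal quadrature over the stored history; the fixed linear maps stand in for the neural networks NIDE would learn, and all functional forms here are illustrative:

```python
import numpy as np

def solve_ide(y0, t, f, kernel):
    """Euler integration of y'(t) = f(y(t)) + ∫_0^t kernel(t-s) y(s) ds
    on a uniform time grid t. The integral term makes the dynamics
    non-Markovian: each step depends on the whole stored trajectory,
    not just the current state."""
    dt = t[1] - t[0]
    y = np.empty_like(t)
    y[0] = y0
    for i in range(len(t) - 1):
        hist = kernel(t[i] - t[: i + 1]) * y[: i + 1]
        # Trapezoidal quadrature over the history (uniform grid).
        integral = dt * (hist.sum() - 0.5 * (hist[0] + hist[-1])) if i > 0 else 0.0
        y[i + 1] = y[i] + dt * (f(y[i]) + integral)
    return y

# Illustrative choices: linear local dynamics, exponential memory kernel.
t = np.linspace(0.0, 2.0, 201)
traj = solve_ide(1.0, t, f=lambda y: -y, kernel=lambda u: 0.5 * np.exp(-u))
print(traj[-1])   # close to the closed form exp(-2)*cosh(2/sqrt(2)) ≈ 0.295
```

Setting the kernel to zero recovers an ODE, which is how the learned integral operator lets NIDE separate the Markovian from the non-Markovian part of the dynamics.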
Socio-economic characteristics influence the temporal and spatial variability of water demand, the biggest source of uncertainty in water distribution system modeling. Improving our knowledge of these influences can help decrease demand uncertainties. This paper aims to link smart water meter data to socio-economic user characteristics by applying a novel clustering algorithm that uses a dynamic time warping metric on daily demand patterns. The approach is tested on simulated and measured single-family home datasets. We show that the novel algorithm outperforms commonly used clustering methods, both in finding the right number of clusters and in assigning patterns correctly. Additionally, the methodology can be used to identify outliers within clusters of demand patterns. Furthermore, this study investigates which socio-economic characteristics (e.g., employment status, number of residents) are prevalent within single clusters and, consequently, can be linked to the shape of the cluster's barycenters. In the future, the proposed methods, in combination with stochastic demand models, can be used to fill data gaps in hydraulic models.
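The dynamic time warping metric at the core of the clustering can be sketched in a few lines. Below is the standard O(nm) dynamic-programming formulation (a generic implementation, not the paper's code), which shows why DTW suits demand patterns: it aligns the series in time before comparing them.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D demand patterns.

    Unlike the Euclidean metric, DTW aligns the two series in time, so
    two daily patterns with the same shape but shifted peaks (e.g. an
    early vs. a late morning shower) stay close together.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

x = [0, 0, 1, 2, 1, 0]
y = [0, 1, 2, 1, 0, 0]        # same peak, shifted one step earlier
print(dtw_distance(x, x))      # 0.0 -- identical series
print(dtw_distance(x, y))      # 0.0 -- warping fully absorbs the shift
```

A Euclidean comparison of `x` and `y` would report a nonzero distance purely because of the time shift, which is exactly the artifact DTW-based clustering of daily patterns avoids.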
This work develops LoneSTAR, a novel enabler of full-duplex millimeter wave (mmWave) communication systems through the design of analog beamforming codebooks. LoneSTAR codebooks deliver high beamforming gain and broad coverage while simultaneously reducing the self-interference coupled between the transmit and receive beams at a full-duplex mmWave transceiver. Our design framework accomplishes this by tolerating some variability in transmit and receive beamforming gain to strategically shape beams that reject self-interference spatially, while accounting for digitally-controlled analog beamforming networks and self-interference channel estimation error. By leveraging the coherence time of the self-interference channel, a mmWave system can use the same LoneSTAR design over many time slots to serve several downlink-uplink user pairs in a full-duplex fashion without the need for additional self-interference cancellation. Compared to those using conventional codebooks, full-duplex mmWave systems employing LoneSTAR codebooks can mitigate higher levels of self-interference, tolerate more cross-link interference, and demand lower SNRs in order to outperform half-duplex operation -- all while supporting beam alignment. This makes LoneSTAR a potential standalone solution for enabling simultaneous transmission and reception in mmWave systems, from which it derives its name.
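For context on the codebook design objective, the beamforming gain of an analog beam $\mathbf{w}$ toward angle $\theta$ for a uniform linear array is $|\mathbf{a}(\theta)^H \mathbf{w}|^2$. The sketch below evaluates this for a simple conjugate (matched) beam; it is a textbook baseline, not LoneSTAR's actual optimization, which additionally trades gain variability against self-interference rejection under phase-shifter constraints:

```python
import numpy as np

def steering_vector(n, theta, d=0.5):
    """Uniform linear array steering vector: n elements, spacing d
    in wavelengths, direction theta in radians."""
    k = np.arange(n)
    return np.exp(1j * 2 * np.pi * d * k * np.sin(theta))

def beamforming_gain(w, theta, d=0.5):
    """Array gain of beamformer w toward direction theta."""
    a = steering_vector(len(w), theta, d)
    return np.abs(a.conj() @ w) ** 2

n = 16
theta0 = np.deg2rad(20)
# Conjugate beamforming: unit-norm beam matched to theta0.
w = steering_vector(n, theta0) / np.sqrt(n)

print(beamforming_gain(w, theta0))            # n = 16: full array gain
print(beamforming_gain(w, np.deg2rad(-40)))   # far sidelobe, much lower
```

A codebook is a set of such beams covering a range of directions; LoneSTAR's contribution is shaping each beam's sidelobes so that the transmit and receive beams of a full-duplex transceiver couple as little self-interference as possible.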
This paper demonstrates the potential of the long short-term memory (LSTM) network when applied to macroeconomic time series data sampled at different frequencies. We first present how the conventional LSTM model can be adapted to time series observed at mixed frequencies when the same mismatch ratio applies to all pairs of low-frequency output and higher-frequency variables. To generalize the LSTM to the case of multiple mismatch ratios, we adopt the unrestricted Mixed DAta Sampling (U-MIDAS) scheme (Foroni et al., 2015) into the LSTM architecture. We assess the out-of-sample predictive performance via both Monte Carlo simulations and an empirical application. Our proposed models outperform the restricted MIDAS model even in a setup favorable to the MIDAS estimator. For a real-world application, we study forecasting the quarterly growth rate of Thai real GDP using a vast array of both quarterly and monthly macroeconomic indicators. Our LSTM with the U-MIDAS scheme easily beats the simple benchmark AR(1) model at all horizons, but outperforms the strong benchmark univariate LSTM only at one and six months ahead. Nonetheless, we find that our proposed model can be very helpful for short-term forecasts during periods of large economic downturn. Simulation and empirical results support the use of our proposed LSTM with the U-MIDAS scheme for nowcasting applications.
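The U-MIDAS idea — entering each high-frequency lag as its own unrestricted regressor — can be sketched for the quarterly/monthly case (mismatch ratio 3). Below is a minimal numpy sketch of the frequency alignment only; the function name and layout are illustrative, and the paper feeds such aligned inputs into an LSTM rather than a linear regression:

```python
import numpy as np

def umidas_design(monthly, n_lags=3):
    """Align a monthly series with quarterly targets, U-MIDAS style.

    Each quarterly observation gets the n_lags most recent monthly
    values as separate columns, with no polynomial restriction tying
    their coefficients together (unlike restricted MIDAS).
    Assumes the series starts at the first month of a quarter.
    """
    monthly = np.asarray(monthly, dtype=float)
    n_quarters = len(monthly) // 3
    rows = []
    for q in range(n_quarters):
        end = 3 * (q + 1)                         # last month of quarter q
        rows.append(monthly[end - n_lags:end][::-1])  # most recent first
    return np.vstack(rows)

m = np.arange(1, 13)          # 12 months = 4 quarters
X = umidas_design(m)
print(X)
# Row for Q1 is [3, 2, 1]: March, February, January as separate regressors.
```

With multiple indicators at different frequencies, one such block is built per mismatch ratio, which is the generalization the paper builds into the LSTM architecture.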
State-of-the-art polarimeter calibration is reviewed. Producing many quasi-random polarization states and moving/bending a fiber without changing power allows finding a polarimeter calibration where the degree of polarization reaches unity and parasitic polarization-dependent loss (PDL) is small. Using a polarization scrambler/transformer and a polarimeter, a device under test can be characterized. Its Mueller matrix can be decomposed into the product of a nondepolarizing Mueller-Jones matrix and a purely depolarizing Mueller matrix. Test polarizations may drift over time. With the help of an optical switch, the reference device can be measured against an internal reference path. Later, with possibly different test polarizations, the actual device under test is measured against the internal reference. Polarization drift and the need for repeated reference device measurements are thus overcome. When a patchcord is inserted, connector PDL can be measured, provided that errors are calibrated away, again by fiber moving/bending. Experimentally, we have measured PDL with errors <0.004 dB. This easily suffices to measure connector PDL, which is demonstrated. PDL >60 dB was measured when the device under test was a good polarizer. A 20 Mrad/s polarization scrambler with a LiNbO3 device generates the test polarizations. The polarimeter can sample at 100 MHz and can store 64M Stokes vectors. During laser frequency scans, Mueller matrices can be measured in time intervals as short as 5 μs.
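Two of the quantities the calibration targets have simple closed forms: the degree of polarization of a Stokes vector, $\mathrm{DOP} = \sqrt{S_1^2+S_2^2+S_3^2}/S_0$, and the PDL of a Mueller matrix, which depends only on its first row. A short sketch (the example matrix is illustrative, not a measured device):

```python
import numpy as np

def degree_of_polarization(stokes):
    """DOP of a Stokes vector (S0, S1, S2, S3)."""
    s0, s1, s2, s3 = stokes
    return np.sqrt(s1**2 + s2**2 + s3**2) / s0

def pdl_db(M):
    """Polarization-dependent loss (dB) of a Mueller matrix M.

    Max/min transmission over all input polarizations depend only on
    the first row: T = m00 ± sqrt(m01^2 + m02^2 + m03^2).
    """
    m00, m01, m02, m03 = M[0]
    d = np.sqrt(m01**2 + m02**2 + m03**2)
    return 10.0 * np.log10((m00 + d) / (m00 - d))

print(degree_of_polarization([1.0, 1.0, 0.0, 0.0]))   # 1.0: fully polarized
print(degree_of_polarization([1.0, 0.0, 0.0, 0.0]))   # 0.0: unpolarized

# Illustrative partially polarizing element (only the first row matters
# for the PDL computation).
M = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.9, 0.0],
              [0.0, 0.0, 0.0, 0.9]])
print(pdl_db(M))   # 10*log10(1.5/0.5) ≈ 4.77 dB
```

A calibration in which measured DOP reaches unity for fully polarized inputs, while the apparent PDL of a lossless fiber stays near zero, is exactly the consistency condition the fiber moving/bending procedure exploits.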
The main contribution of this paper is the proof of the convexity of the omni-directional tethered robot workspace (namely, the set of all tether-length-admissible robot configurations), as well as a set of distance-optimal tethered path planning algorithms that leverage the workspace convexity. The workspace is proven to be topologically a simply-connected subset and geometrically a convex subset of the set of all configurations. As a direct result, the tether-length-admissible optimal path between two configurations is proven to be exactly the untethered collision-free locally shortest path in the homotopy specified by the concatenation of the tether curves of the given configurations, which can be constructed simply by performing an untethered path-shortening process in the 2D environment instead of a path-searching process in the pre-calculated workspace. The convexity is an intrinsic property of the tethered robot kinematics and thus has universal impact on all high-level distance-optimal tethered path planning tasks: the most time-consuming workspace pre-calculation (WP) process is replaced with a goal configuration pre-calculation (GCP) process, and the homotopy-aware path-searching process is replaced with untethered path-shortening processes. Motivated by the workspace convexity, efficient algorithms to solve the following problems are naturally proposed: (a) the optimal tethered reconfiguration (TR) planning problem is solved by a locally untethered path-shortening (UPS) process; (b) the classic optimal tethered path (TP) planning problem (from a starting configuration to a goal location where the target tether state is not assigned) is solved by a GCP process and $n$ UPS processes, where $n$ is the number of tether-length-admissible configurations that visit the goal location; (c) the optimal tethered motion to visit a sequence of multiple goal locations, referred to as
Recently, the application of Reinforcement Learning (RL) methodologies to NP-hard combinatorial optimization problems has become a popular topic. This is essentially due to the nature of traditional combinatorial algorithms, which are often based on a trial-and-error process that RL aims to automate. In this regard, this paper focuses on the application of RL to the Vehicle Routing Problem (VRP), a famous combinatorial problem that belongs to the class of NP-hard problems. In this work, the problem is first modeled as a Markov Decision Process (MDP), and then the PPO method (which belongs to the actor-critic class of RL methods) is applied. In a second phase, the neural architecture behind the actor and critic is established, adopting a convolutional neural network for both; this choice proved effective in addressing problems of different sizes. Experiments performed on a wide range of instances show that the algorithm has good generalization capabilities and can reach good solutions in a short time. A comparison between the proposed algorithm and the state-of-the-art solver OR-Tools shows that the latter still outperforms the RL algorithm. However, there are future research perspectives that aim to improve the current performance of the proposed algorithm.
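The MDP formulation can be made concrete with a minimal capacitated-VRP environment: the state is (current node, remaining capacity, unvisited customers), an action selects the next customer or a depot return, and the reward is the negative travel distance. The class name, state encoding, and reward shaping below are illustrative, not the paper's exact formulation:

```python
import numpy as np

class ToyCVRP:
    """Minimal capacitated VRP as an MDP. Node 0 is the depot."""

    def __init__(self, coords, demands, capacity):
        self.coords = np.asarray(coords, float)
        self.demands = np.asarray(demands, float)
        self.capacity = capacity
        self.reset()

    def reset(self):
        self.pos = 0
        self.load = self.capacity
        self.unvisited = set(range(1, len(self.demands)))
        return (self.pos, self.load, frozenset(self.unvisited))

    def step(self, action):
        """Move to node `action`; reward is the negative travel distance."""
        dist = np.linalg.norm(self.coords[action] - self.coords[self.pos])
        self.pos = action
        if action == 0:
            self.load = self.capacity            # refill at the depot
        else:
            self.load -= self.demands[action]
            self.unvisited.discard(action)
        done = not self.unvisited and self.pos == 0
        return (self.pos, self.load, frozenset(self.unvisited)), -dist, done

# Roll out a fixed feasible tour on a 3-customer unit-square instance.
env = ToyCVRP([(0, 0), (0, 1), (1, 1), (1, 0)], [0, 1, 1, 1], capacity=3)
state, total = env.reset(), 0.0
for a in (1, 2, 3, 0):
    state, r, done = env.step(a)
    total += r
print(total)   # -(1 + 1 + 1 + 1) = -4.0
```

A PPO agent replaces the fixed action sequence with a learned policy that maps the state to a distribution over feasible next nodes, and the episode return (negative total route length) is the quantity being maximized.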
Short-term plasticity (STP) is a mechanism that stores decaying memories in synapses of the cerebral cortex. In computing practice, STP has been used, but mostly in the niche of spiking neurons, even though theory predicts that it is the optimal solution to certain dynamic tasks. Here we present a new type of recurrent neural unit, the STP Neuron (STPN), which turns out to be strikingly powerful. Its key mechanism is that synapses have a state, propagated through time by a self-recurrent connection within the synapse. This formulation enables training the plasticity with backpropagation through time, resulting in a form of learning to learn and forget in the short term. The STPN outperforms all tested alternatives, i.e., RNNs, LSTMs, other models with fast weights, and differentiable plasticity. We confirm this in both supervised and reinforcement learning (RL), on tasks such as Associative Retrieval, Maze Exploration, Atari video games, and MuJoCo robotics. Moreover, we calculate that, in neuromorphic or biological circuits, the STPN consumes the least energy among the tested models, as it depresses individual synapses dynamically. Based on these results, biological STP may have been a strong evolutionary attractor that maximizes both efficiency and computational power. The STPN now brings these neuromorphic advantages to a broad spectrum of machine learning practice as well. Code is available at https://github.com/NeuromorphicComputing/stpn
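The core mechanism — a per-synapse state carried forward by a self-recurrent connection — can be sketched as a toy recurrent cell in which each effective weight is a fixed learned component plus a decaying, activity-dependent component. The decay and Hebbian-like update below are a simplified illustration of short-term plasticity, not the exact STPN equations:

```python
import numpy as np

def stp_step(x, h, W, F, lam=0.9, gamma=0.1):
    """One step of a toy short-term-plasticity recurrent cell.

    W     : fixed learned weights over [input; hidden state]
    F     : per-synapse short-term state, same shape as W
    lam   : decay of the synaptic state via its self-recurrence
    gamma : Hebbian-like increment from pre/post activity
    """
    z = np.concatenate([x, h])     # presynaptic activity
    G = W + F                      # effective weights = fixed + plastic
    h_new = np.tanh(G @ z)
    # Each synapse's state decays and is driven by its own pre/post
    # pair, so recently active synapses are transiently strengthened.
    F_new = lam * F + gamma * np.outer(h_new, z)
    return h_new, F_new

rng = np.random.default_rng(1)
n_in, n_hid = 6, 4
W = rng.normal(scale=0.3, size=(n_hid, n_in + n_hid))
F = np.zeros_like(W)
h = np.zeros(n_hid)
for _ in range(5):
    h, F = stp_step(rng.normal(size=n_in), h, W, F)
print(F.shape)   # (4, 10): one short-term state per synapse
```

Because the whole step is differentiable, `W` (and, in the STPN, the plasticity parameters themselves) can be trained with backpropagation through time, which is what yields the learning-to-learn-and-forget behavior described above.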
In this paper, we deal with a general distributed constrained online learning problem with privacy over time-varying networks, involving a class of nondecomposable objective functions. In this setting, each node controls only a part of the global decision variable, and the goal of all nodes is to collaboratively minimize the global objective over a time horizon $T$ while guaranteeing the security of the transmitted information. For such problems, we first design a novel generic algorithm framework, named DPSDA, for differentially private distributed online learning using the Laplace mechanism and stochastic variants of the dual averaging method. We then propose two algorithms under this framework, named DPSDA-C and DPSDA-PS. Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}( \sqrt{T} )$ when the objective function is convex, which matches the best utility achievable by cutting-edge algorithms. Finally, numerical experiments on both real-world and randomly generated datasets verify the effectiveness of our algorithms.
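The privacy guarantee rests on the Laplace mechanism: each quantity a node transmits is perturbed with Laplace noise whose scale is the quantity's sensitivity divided by the privacy budget $\epsilon$. A minimal sketch (the sensitivity and $\epsilon$ values are illustrative, and the framework's dual-averaging updates are omitted):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """ε-differentially private release of `value`.

    Noise scale b = sensitivity / epsilon: a lower epsilon (stronger
    privacy) means more noise on the shared message, which is the
    source of the privacy/utility trade-off in the regret bound.
    """
    b = sensitivity / epsilon
    return value + rng.laplace(scale=b, size=np.shape(value))

rng = np.random.default_rng(0)
grad = np.array([0.5, -1.2, 0.3])   # quantity a node would transmit
private = laplace_mechanism(grad, sensitivity=1.0, epsilon=0.5, rng=rng)
print(private - grad)               # pure Laplace noise, scale b = 2.0
```

In DPSDA, such noisy messages feed the stochastic dual averaging updates, and the analysis shows the injected noise still permits the $\mathcal{O}(\sqrt{T})$ expected regret stated above.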