Nowadays, clinical research routinely uses omics data, such as gene expression, for predicting clinical outcomes or selecting markers. Additionally, so-called co-data are often available, providing complementary information on the covariates, like p-values from previously published studies or groups of genes corresponding to pathways. Elastic net penalisation is widely used for prediction and covariate selection. Group-adaptive elastic net penalisation learns from co-data to improve the prediction and covariate selection, by penalising important groups of covariates less than other groups. Existing methods are, however, computationally expensive. Here we present a fast method for marginal likelihood estimation of group-adaptive elastic net penalties for generalised linear models. We first derive a low-dimensional representation of the Taylor approximation of the marginal likelihood and its first derivative for group-adaptive ridge penalties, to efficiently estimate these penalties. Then we show by using asymptotic normality of the linear predictors that the marginal likelihood for elastic net models may be approximated well by the marginal likelihood for ridge models. The ridge group penalties are then transformed to elastic net group penalties by using the variance function. The method allows for overlapping groups and unpenalised variables. We demonstrate the method in a model-based simulation study and an application to cancer genomics. The method substantially decreases computation time and outperforms or matches other methods by learning from co-data.
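To make the group-adaptive ridge step concrete, here is a minimal sketch of fitting a linear model in which each covariate group receives its own ridge penalty. The function name and interface are ours, and the marginal-likelihood estimation of the penalties themselves (the paper's contribution) is not reproduced; the penalties are simply taken as given.

```python
import numpy as np

def group_adaptive_ridge(X, y, groups, group_penalties):
    """Fit a linear ridge model with a separate penalty per covariate group
    (illustrative helper; the paper estimates the group penalties by
    marginal likelihood, which is not shown here).

    X: (n, p) design matrix; y: (n,) response;
    groups: length-p array of group labels;
    group_penalties: dict mapping group label -> ridge penalty lambda_g.
    """
    lam = np.array([group_penalties[g] for g in groups])  # per-covariate penalty
    # Solve (X'X + diag(lam)) beta = X'y
    beta = np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)
    return beta
```

A heavily penalised group is shrunk towards zero, which is how informative co-data groups can be favoured over uninformative ones.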
Movement control of artificial limbs has made significant advances in recent years. New sensor and control technology has enhanced the functionality and usefulness of artificial limbs to the point that complex movements, such as grasping, can be performed to a limited extent. To date, the most successful results have been achieved with recurrent neural networks (RNNs). However, in the domain of artificial hands, experiments have so far been limited to non-mobile wrists, which significantly reduces the functionality of such prostheses. In this paper, for the first time, we present empirical results on gesture recognition with both mobile and non-mobile wrists. Furthermore, we demonstrate that recurrent neural networks with simple recurrent units (SRU) outperform regular RNNs in both cases in terms of gesture recognition accuracy, on data acquired by an arm band sensing the electrical activity of arm muscles (via surface electromyography, or sEMG). Finally, we show that adding domain adaptation techniques to continuous gesture recognition with RNNs improves transfer between subjects, where a limb controller trained on data from one person is used by another person.
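The key property of the SRU mentioned above is that its recurrence is elementwise, with no matrix multiplication on the hidden state. The following is a minimal NumPy sketch of an SRU forward pass (after Lei et al.); the weight names are ours, and the highway connection assumes the input and hidden sizes match.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_forward(xs, W, Wf, bf, Wr, br):
    """Minimal forward pass of a Simple Recurrent Unit (SRU) layer.
    xs: (T, d) input sequence; all weight matrices are (d, d)."""
    d = W.shape[0]
    c = np.zeros(d)
    hs = []
    for x in xs:
        f = sigmoid(Wf @ x + bf)            # forget gate: depends on the input only
        r = sigmoid(Wr @ x + br)            # reset/highway gate
        c = f * c + (1.0 - f) * (W @ x)     # elementwise recurrence (no hidden matmul)
        h = r * np.tanh(c) + (1.0 - r) * x  # highway connection to the input
        hs.append(h)
    return np.stack(hs)
```

Because the per-step recurrence is elementwise, the expensive matrix products depend only on the inputs and can be batched across time, which is what makes SRUs fast to train.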
In recent years, multi-dimensional online decision making has played a crucial role in many practical applications such as online recommendation and digital marketing. To address such problems, we introduce stochastic low-rank tensor bandits, a class of bandits whose mean rewards can be represented as a low-rank tensor. We propose two learning algorithms, tensor epoch-greedy and tensor elimination, and develop finite-time regret bounds for them. We observe that tensor elimination has an optimal dependency on the time horizon, while tensor epoch-greedy has a sharper dependency on the tensor dimensions. Numerical experiments further support these theoretical findings and show that our algorithms outperform various state-of-the-art approaches that ignore the low-rank tensor structure.
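To illustrate how exploiting low-rank structure can help a bandit learner, here is a toy two-dimensional (matrix) analogue of the idea: an epsilon-greedy learner that projects its empirical mean matrix onto rank r via SVD truncation before acting. This is an illustrative sketch, not the paper's tensor epoch-greedy or tensor elimination algorithm, and the exploration schedule is ours.

```python
import numpy as np

def matrix_epoch_greedy(true_means, horizon, rank=1, seed=0):
    """Toy bandit over a matrix of arm means that exploits low-rank
    structure by truncating the empirical mean matrix via SVD
    (a 2-D illustration of the tensor idea, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    d1, d2 = true_means.shape
    sums = np.zeros((d1, d2))
    counts = np.zeros((d1, d2))
    total = 0.0
    for t in range(1, horizon + 1):
        if rng.random() < min(1.0, 10.0 / t):          # decaying exploration
            i, j = rng.integers(d1), rng.integers(d2)
        else:
            means = np.divide(sums, counts,
                              out=np.zeros_like(sums), where=counts > 0)
            U, s, Vt = np.linalg.svd(means)
            low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r projection
            i, j = np.unravel_index(np.argmax(low_rank), (d1, d2))
        r = true_means[i, j] + 0.1 * rng.normal()      # noisy reward
        sums[i, j] += r
        counts[i, j] += 1
        total += r
    return total / horizon
```

When the true mean tensor is genuinely low-rank, the projection denoises the estimates of rarely pulled arms, which is the structural advantage the paper's regret bounds quantify.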
A mega-constellation of low Earth orbit (LEO) satellites (SATs) and burgeoning unmanned aerial vehicles (UAVs) are promising enablers for high-speed and long-distance communications in beyond-fifth-generation (5G) systems. Integrating SATs and UAVs within a non-terrestrial network (NTN), in this article we investigate the problem of forwarding packets between two faraway ground terminals through SAT and UAV relays using either millimeter-wave (mmWave) radio-frequency (RF) or free-space optical (FSO) links. To maximize the communication efficiency, the real-time associations with orbiting SATs and the moving trajectories of UAVs should be optimized jointly with suitable FSO/RF link selection, which is challenging due to the time-varying network topology and the huge number of possible control actions. To overcome this difficulty, we cast the problem as multi-agent deep reinforcement learning (MARL) with a novel action dimensionality reduction technique. Simulation results corroborate that our proposed SAT-UAV integrated scheme achieves 1.99x higher end-to-end sum throughput than a benchmark scheme with fixed ground relays. While improving the throughput, the proposed scheme also reduces the UAV control energy, yielding 2.25x higher energy efficiency than a baseline that only maximizes throughput. Lastly, thanks to the hybrid FSO/RF links, the proposed scheme achieves up to 62.56x higher peak throughput and 21.09x higher worst-case throughput than schemes using either RF or FSO links alone, highlighting the importance of co-designing SAT-UAV associations, UAV trajectories, and hybrid FSO/RF links in beyond-5G NTNs.
For quadrotor trajectory planning, describing a polynomial trajectory either by its coefficients or by its end-derivatives offers its own convenience in energy minimization. We call these the double descriptions of polynomial trajectories. The transformation between them, which causes most of the inefficiency and instability, is formally analyzed in this paper. Leveraging its analytic structure, we design a linear-complexity scheme for both jerk/snap minimization and parameter gradient evaluation that is efficient, stable, flexible, and scalable. With our scheme, generating an energy-optimal (minimum-snap) trajectory costs only 1 $\mu s$ per piece, even at scales of up to 1,000,000 pieces. Moreover, generating large-scale energy-time optimal trajectories is also accelerated by an order of magnitude relative to conventional methods.
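The linear map between the two descriptions can be made explicit for a single quintic (minimum-jerk) piece: stacking the derivative constraints at both endpoints gives a 6x6 system in the six polynomial coefficients. The sketch below solves that system directly; it illustrates the double description, not the paper's linear-complexity scheme, and the function name is ours.

```python
import numpy as np
from math import factorial

def quintic_from_end_derivatives(d0, dT, T):
    """Recover the coefficients of a quintic piece from its end-derivatives.

    d0, dT: [position, velocity, acceleration] at t = 0 and t = T.
    Returns coefficients c with p(t) = sum_i c[i] * t**i.
    """
    M = np.zeros((6, 6))
    for k in range(3):                  # derivatives at t = 0: p^(k)(0) = k! c_k
        M[k, k] = factorial(k)
    for k in range(3):                  # derivatives at t = T
        for i in range(k, 6):
            M[3 + k, i] = factorial(i) / factorial(i - k) * T ** (i - k)
    return np.linalg.solve(M, np.concatenate([d0, dT]))
```

Solving such systems naively for every piece is exactly the transformation whose cost and conditioning the paper analyzes; the analytic structure of M is what enables the linear-complexity alternative.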
In tasks such as surveying or monitoring remote regions, an autonomous robot must move while transmitting data over a wireless network with unknown, position-dependent transmission rates. For such a robot, this paper considers the problem of transmitting a data buffer in minimum time, while possibly also navigating towards a goal position. Two approaches are proposed, each consisting of a machine-learning component that estimates the rate function from samples and an optimal-control component that moves the robot given the current rate-function estimate. Simple obstacle avoidance is performed for the case without a goal position. In extensive simulations, these methods achieve competitive performance compared to known-rate and unknown-rate baselines. A real indoor experiment is also reported, in which a Parrot AR.Drone 2 successfully learns to transmit the buffer.
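As a minimal stand-in for the machine-learning component described above, a position-dependent rate function can be estimated non-parametrically from samples, for example by k-nearest-neighbour averaging. The function and its interface are ours, not the paper's.

```python
import numpy as np

def knn_rate_estimate(positions, rates, query, k=3):
    """Estimate the transmission rate at a query position as the average
    rate of the k nearest sampled positions (illustrative sketch only).

    positions: list of sampled (x, y) positions; rates: rate measured at each.
    """
    d = np.linalg.norm(np.asarray(positions) - np.asarray(query), axis=1)
    nearest = np.argsort(d)[:k]          # indices of the k closest samples
    return float(np.mean(np.asarray(rates)[nearest]))
```

An optimal-control component could then query this estimate along candidate trajectories to trade off moving towards the goal against lingering in high-rate regions.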
We have previously shown how to socially integrate a fish robot into a group of zebrafish using biomimetic behavioural models. These models have to be calibrated on experimental data to reproduce the correct behavioural features, and this calibration is essential for the social integration of the robot into the group. Once calibrated, the behavioural model is implemented to drive a robot with closed-loop control of social interactions within a group of zebrafish. This approach can be used to form mixed groups and to study individual and collective animal behaviour with biomimetic autonomous robots capable of responding to the animals in long-standing experiments. Here, we present a methodology for continuous real-time calibration and refinement of a multi-level behavioural model. An evolutionary algorithm performs the real-time calibration by simulating the model and matching the simulations to the fish behaviour observed in real time. The calibrated model is updated on the robot and tested during the experiments. This method makes it possible to cope with changes in the dynamics of fish behaviour. Moreover, each fish exhibits individual behavioural differences; thus, each trial is performed with a naive fish group that displays behavioural variability. This real-time calibration methodology can optimise the robot behaviours during the experiments. Our implementation of this methodology runs on three different computers that perform individual tracking, data analysis, multi-objective evolutionary optimisation, simulation of the fish robot, and adaptation of the robot behavioural models, all in real time.
Autonomous car racing raises fundamental robotics challenges such as planning minimum-time trajectories under uncertain dynamics and controlling the car at its friction limits. In this project, we consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. Gran Turismo Sport is known for its detailed physics simulation of various cars and tracks. Our approach makes use of maximum-entropy deep reinforcement learning and a new reward design to train a sensorimotor policy to complete a given race track as fast as possible. We evaluate our approach in three different time trial settings with different cars and tracks. Our results show that the obtained controllers not only beat the built-in non-player character of Gran Turismo Sport, but also outperform the fastest known times in a dataset of personal best lap times of over 50,000 human drivers.
We introduce a novel approach to scanned-document representation for field extraction. It simultaneously encodes textual, visual, and layout information in a 3D matrix used as input to a segmentation model. We improve on the recent Chargrid and Wordgrid models in several ways, first by taking the visual modality into account, then by improving robustness on small datasets while keeping inference time low. Our approach is tested on public and private document-image datasets, showing higher performance than recent state-of-the-art methods.
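The grid-style encoding underlying Chargrid and Wordgrid can be sketched as rasterising OCR tokens into a 3D tensor, one channel per vocabulary entry, filled over each token's bounding box. This is a simplified, word-level sketch with names of our choosing; the approach described above additionally fuses the visual modality, which is omitted here.

```python
import numpy as np

def build_chargrid(tokens, page_h, page_w, vocab):
    """Rasterise OCR tokens into a grid-style 3D input tensor
    (simplified sketch: one channel per vocabulary word).

    tokens: list of (word, x0, y0, x1, y1) with pixel coordinates.
    Returns a (len(vocab), page_h, page_w) float tensor.
    """
    grid = np.zeros((len(vocab), page_h, page_w), dtype=np.float32)
    index = {w: i for i, w in enumerate(vocab)}
    for word, x0, y0, x1, y1 in tokens:
        ch = index.get(word)
        if ch is not None:
            grid[ch, y0:y1, x0:x1] = 1.0   # one-hot fill over the bounding box
    return grid
```

Such a tensor preserves the page layout, so a standard image-segmentation model can predict a field label per pixel and fields can be read off from the segmented regions.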
The fourth-generation wireless technology (4G) has been adopted by all major operators in the world and has dominated the cellular landscape for around a decade. Many research directions and new technologies are being considered as potential elements of the next-generation wireless communication system (5G). The lack of realistic and flexible experimentation platforms for collecting real communication data has limited and slowed the adoption of new approaches. Software Defined Radio (SDR) can provide flexible, upgradable, and long-lifetime radio equipment for the wireless communications infrastructure, as well as more flexible and possibly cheaper multi-standard terminals for end users. By modifying the open-source code, real-valued measurements can be performed freely. This paper presents a real Long Term Evolution (LTE) channel measurement method based on OpenAirInterface (OAI) for the evaluation of channel prediction algorithms. First, the experimentation platform is established using OAI, a Universal Software Radio Peripheral (USRP), and commercial User Equipment (UE). Then, parts of the OAI source code are analyzed and modified so that real-time over-the-air channel measurement can be achieved. The measured channel data are then used to train and test a channel prediction algorithm. The test results show that the implemented channel measurement method meets the needs of algorithm verification and can be further extended for the development of additional algorithms.