This letter proposes an integrated approach for a drone (or multirotor) to perform an autonomous videography task in a 3-D obstacle environment by following a moving object. The proposed system includes 1) a target motion prediction module that can be applied to dense environments and 2) a hierarchical chasing planner based on a proposed visibility metric. In the prediction module, we minimize the observation error subject to the constraint that the target object itself does not collide with obstacles; the estimated future trajectory of the target is obtained by covariant optimization. The chasing planner has a bi-level structure composed of a preplanner and a smooth planner. In the first phase, we leverage a graph-search method to preplan a chasing corridor that incorporates the safety and visibility of the target during a time window. In the subsequent phase, we generate a smooth and dynamically feasible path within the corridor using quadratic programming (QP). We validate our approach in multiple complex scenarios and in real-world experiments. The source code is available at https://github.com/icsl-Jeon/traj_gen_vis
Vertical Cavity Surface Emitting Lasers (VCSELs) have demonstrated suitability for data transmission in indoor optical wireless communication (OWC) systems due to the high modulation bandwidth and low manufacturing cost of these sources. Resource allocation, in particular, is one of the major challenges affecting the performance of multi-user optical wireless systems. In this paper, an optimisation problem is formulated to optimally assign each user to an optical access point (AP), composed of multiple VCSELs within a VCSEL array, at a given time so as to maximise the signal to interference plus noise ratio (SINR). In this context, a mixed-integer linear programming (MILP) model is introduced to solve the optimisation problem. Despite its optimality, the MILP model is considered impractical due to its high complexity, high memory requirements, and need for full system information. Therefore, Reinforcement Learning (RL) is considered, which has recently been widely investigated as a practical solution for various optimisation problems in cellular networks due to its ability to interact with an environment without previous experience. In particular, a Q-learning (QL) algorithm is investigated to perform resource management in a steerable VCSEL-based OWC system. The results demonstrate the ability of the QL algorithm to achieve solutions close to those of the optimal MILP model. Moreover, the adoption of beam steering, using holograms implemented with liquid crystal devices, further enhances the performance of the network considered.
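The QL-based assignment can be sketched in a stateless (bandit-style) toy: a user learns which of a few APs yields the highest noisy SINR-like reward via epsilon-greedy tabular Q-learning. The AP count, reward means, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setting: one user chooses among 3 APs; each AP yields a
# noisy SINR-like reward with a different (unknown) mean.
mean_sinr = np.array([2.0, 5.0, 3.0])
n_aps = len(mean_sinr)

# Stateless tabular Q-learning with epsilon-greedy exploration.
Q = np.zeros(n_aps)
alpha, eps = 0.1, 0.1
for t in range(5000):
    a = rng.integers(n_aps) if rng.random() < eps else int(np.argmax(Q))
    reward = mean_sinr[a] + rng.normal(0.0, 0.5)
    Q[a] += alpha * (reward - Q[a])   # single-state update: no bootstrap term

best_ap = int(np.argmax(Q))
```

The full problem in the paper is multi-user with interference coupling, so the state and reward would depend on the joint assignment; this sketch only shows the learning mechanism.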
Robust physics discovery is of great interest for many scientific and engineering fields. Inspired by the principle that a representative model is the simplest possible one, a new model selection criterion considering both a model's parsimony and sparsity is proposed. A Parsimony Enhanced Sparse Bayesian Learning (PeSBL) method is developed for discovering the governing Partial Differential Equations (PDEs) of nonlinear dynamical systems. Compared with the conventional Sparse Bayesian Learning (SBL) method, the PeSBL method promotes parsimony of the learned model in addition to its sparsity. In this method, the parsimony of model terms is, for the first time, evaluated using their locations in the prescribed candidate library, accounting for the increased complexity that comes with higher polynomial powers and higher-order spatial derivatives. Subsequently, the model parameters are updated through Bayesian inference on the raw data. This procedure aims to reduce the error associated with the possible loss of information in data preprocessing and numerical differentiation prior to sparse regression. Results of numerical case studies indicate that the governing PDEs of many canonical dynamical systems can be correctly identified using the proposed PeSBL method from highly noisy data (up to 50% in the current study). Next, the proposed methodology is extended to stochastic PDE learning, where all parameters and the modeling error are treated as random variables. Hierarchical Bayesian Inference (HBI) is integrated with the proposed framework for stochastic PDE learning from a population of observations. Finally, the proposed PeSBL method is demonstrated for system response prediction with uncertainties and for anomaly diagnosis. Code for all examples in this study is available at https://github.com/ymlasu.
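The sparse-regression backbone of PDE discovery can be sketched with sequential-threshold least squares over a candidate library: fit all terms, prune weak coefficients, and refit on the survivors. This is a common baseline for this problem class, shown here on synthetic data; it is not the PeSBL algorithm itself, which replaces thresholding with parsimony-aware Bayesian inference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy library: 6 candidate terms, 2 of which are truly active.
n, p = 200, 6
Theta = rng.normal(size=(n, p))              # candidate-term library
xi_true = np.array([0.0, 1.5, 0.0, -0.8, 0.0, 0.0])
u_t = Theta @ xi_true + 0.01 * rng.normal(size=n)   # noisy time derivative

# Sequential-threshold least squares.
xi = np.linalg.lstsq(Theta, u_t, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1                 # prune weak terms
    xi[small] = 0.0
    big = ~small
    if big.any():                            # refit on the surviving terms
        xi[big] = np.linalg.lstsq(Theta[:, big], u_t, rcond=None)[0]

support = np.flatnonzero(xi)                 # indices of identified terms
```

In a real PDE-discovery setting, the columns of `Theta` would be numerically differentiated field quantities (u, u_x, u u_x, u_xx, ...), which is where the noise sensitivity that PeSBL targets arises.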
We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language. We factorize PIGLeT into a physical dynamics model and a separate language model. Our dynamics model learns not just what objects are but also what they do: glass cups break when thrown, plastic ones don't. We then use it as the interface to our language model, giving us a unified model of linguistic form and grounded meaning. PIGLeT can read a sentence, simulate neurally what might happen next, and then communicate that result through a literal symbolic representation or natural language. Experimental results show that our model effectively learns world dynamics, along with how to communicate them. It is able to correctly forecast "what happens next" given an English sentence over 80% of the time, outperforming a 100x larger text-to-text approach by over 10%. Likewise, its natural language summaries of physical interactions are also judged by humans as more accurate than LM alternatives. We present a comprehensive analysis showing room for future work.
Electric vehicles are becoming more popular all over the world. With increasing battery capacities and a growing fast-charging infrastructure, they are becoming suitable for long-distance travel. However, queues at charging stations can lead to long waiting times, making efficient route planning even more important. In general, optimal multi-objective route planning is extremely computationally expensive. We propose an adaptive charging and routing strategy that considers driving, waiting, and charging time. For this, we developed a multi-criterion shortest-path search algorithm using contraction hierarchies. To further reduce the computational effort, we precompute shortest-path trees between the known locations of the charging stations. We propose a central charging station database (CSDB) that helps estimate waiting times at charging stations ahead of time. This enables our adaptive charging and routing strategy to reduce these waiting times. In an extensive set of simulation experiments, we demonstrate the advantages of our concept, which reduces average waiting times at charging stations by up to 97%. Even if only a subset of the cars uses the CSDB approach, we can substantially reduce waiting times and thereby the total travel time of electric vehicles.
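The core trade-off (a longer detour versus a shorter queue) can be sketched with a plain Dijkstra search in which passing through a charging-station node adds its estimated waiting plus charging time. The tiny graph and delay values are assumptions for illustration; the paper's algorithm is a multi-criterion search with contraction hierarchies and precomputed shortest-path trees.

```python
import heapq

# Assumed toy network: A -> B via two charging stations.
graph = {                              # node -> [(neighbor, drive_minutes)]
    "A": [("S1", 30), ("S2", 45)],
    "S1": [("B", 30)],
    "S2": [("B", 20)],
    "B": [],
}
station_delay = {"S1": 50, "S2": 5}    # CSDB-style waiting + charging estimate

def best_route(src, dst):
    # Dijkstra on total time = driving + (waiting + charging) at stations.
    pq = [(0, src, [src])]
    settled = {}
    while pq:
        t, node, path = heapq.heappop(pq)
        if node == dst:
            return t, path
        if settled.get(node, float("inf")) <= t:
            continue
        settled[node] = t
        for nxt, drive in graph[node]:
            heapq.heappush(pq, (t + drive + station_delay.get(nxt, 0),
                                nxt, path + [nxt]))
    return None

total, route = best_route("A", "B")
```

Here the shorter road via S1 (60 min of driving) loses to the detour via S2 (65 min of driving) once the 50-minute queue at S1 is counted, which is exactly the effect the CSDB waiting-time estimates enable.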
Reconfigurable intelligent surface (RIS)-empowered communication is on the rise and is a promising technology envisioned to aid 6G and beyond wireless communication networks. RISs can manipulate impinging waves through their electromagnetic elements, enabling a degree of control over the wireless channel. In this paper, the potential of RIS technology is explored to perform equalization over-the-air for frequency-selective channels, whereas in conventional communication systems equalization is generally conducted at either the transmitter or the receiver. Specifically, with the aid of an RIS, the frequency-selective channel from the transmitter to the RIS is transformed into a frequency-flat channel through the elimination of inter-symbol interference (ISI) components at the receiver. ISI is eliminated by adjusting the phases of the impinging signals so as to maximize the incoming signal power of the strongest tap. First, a general end-to-end system model is provided and a continuous- to discrete-time signal model is presented. Subsequently, a probabilistic analysis for the elimination of the ISI terms is conducted and reinforced with computer simulations. Furthermore, a theoretical error probability analysis is performed along with computer simulations. It is demonstrated that with the proposed method, ISI can successfully be eliminated and the RIS-aided communication channel can be converted from frequency-selective to frequency-flat.
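The tap-alignment idea can be sketched in a toy discrete-tap model: each RIS element contributes a cascaded complex coefficient to every tap, and setting each element's phase to cancel the phase of its coefficient on the strongest tap makes those contributions add coherently (growing roughly with N) while the other ISI taps keep adding incoherently (growing roughly with sqrt(N)). The i.i.d. Rayleigh coefficients and tap structure are simplifying assumptions, not the paper's full continuous-time model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy model: N RIS elements, L channel taps, i.i.d. cascaded
# coefficients h[l, n] for tap l via element n.
N, L = 64, 4
h = (rng.normal(size=(L, N)) + 1j * rng.normal(size=(L, N))) / np.sqrt(2)

# Co-phase every element to the strongest tap (tap 0 here):
# each contribution h[0, n] * exp(j * theta_n) becomes real and positive.
theta = -np.angle(h[0])
taps_aligned = (h * np.exp(1j * theta)).sum(axis=1)

gain_strong = abs(taps_aligned[0])       # coherent sum, scales ~ N
gain_isi = abs(taps_aligned[1:]).max()   # incoherent sums, scale ~ sqrt(N)
```

The growing gap between `gain_strong` and `gain_isi` as N increases is what lets the receiver treat the effective channel as approximately frequency-flat.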
We investigate a resource allocation scheme to reduce the energy consumption of federated learning (FL) in integrated fog-cloud computing enabled Internet-of-Things (IoT) networks. In the envisioned system, IoT devices are connected to a centralized cloud server (CS) via multiple fog access points (F-APs). We consider two scenarios for training the local models. In the first scenario, local models are trained at the IoT devices and the F-APs upload the local model parameters to the CS. In the second scenario, local models are trained at the F-APs based on data collected from the IoT devices, and the F-APs collaborate with the CS to update the model parameters. Our objective is to minimize the overall energy consumption of both scenarios subject to an FL time constraint. Towards this goal, we formulate a joint optimization of IoT device scheduling with the F-APs, transmit power allocation, and computation frequency allocation at the devices and F-APs, and decouple it into two subproblems. In the first subproblem, we optimize the IoT device scheduling and power allocation, while in the second subproblem, we optimize the computation frequency allocation. For each scenario, we develop a conflict-graph-based solution to iteratively solve the two subproblems. Simulation results show that the two proposed schemes achieve a considerable performance gain in terms of energy consumption. Interestingly, the results also reveal that for a large number of IoT devices and large data sizes, it is more energy efficient to train the local models at the IoT devices than at the F-APs.
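A conflict-graph scheduling step can be sketched as a greedy weighted independent-set selection: devices are vertices, an edge marks a pair that cannot be scheduled together, and devices are admitted in decreasing order of utility as long as none of their neighbors is already scheduled. The graph and utilities below are assumed toy values, and the paper's full scheme iterates this with power and frequency allocation.

```python
# Assumed toy conflict graph: an edge means two devices contend for the
# same F-AP resource and cannot be scheduled simultaneously.
conflicts = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}, 4: set()}
utility = {0: 5.0, 1: 3.0, 2: 4.0, 3: 2.0, 4: 1.0}  # e.g. energy saving

scheduled = []
blocked = set()
# Greedy: visit devices in decreasing utility; admit a device only if it
# has not been blocked by an already-scheduled neighbor.
for dev in sorted(utility, key=utility.get, reverse=True):
    if dev not in blocked:
        scheduled.append(dev)
        blocked |= conflicts[dev] | {dev}
```

Device 0 (utility 5.0) blocks its neighbors 1 and 2, after which 3 and 4 remain admissible, yielding the independent set {0, 3, 4}.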
We consider the problem of learning a concept or a query in the presence of an ontology formulated in the description logic ELr, in Angluin's framework of active learning, which allows the learning algorithm to interactively query an oracle (such as a domain expert). We show that the following can be learned in polynomial time: (1) EL-concepts, (2) symmetry-free ELI-concepts, and (3) conjunctive queries (CQs) that are chordal, symmetry-free, and of bounded arity. In all cases, the learner can pose to the oracle membership queries based on ABoxes and equivalence queries that ask whether a given concept/query from the considered class is equivalent to the target. The restriction to bounded arity in (3) can be removed when we admit unrestricted CQs in equivalence queries. We also show that EL-concepts are not polynomial-query learnable in the presence of ELI-ontologies.
This paper considers joint device activity detection and channel estimation in Internet of Things (IoT) networks, where a large number of IoT devices exist but only a random subset of them becomes active for short-packet transmission at each time slot. In particular, we propose to leverage the temporal correlation in user activity, i.e., a device active at the previous time slot is more likely to still be active at the current moment, to improve the detection performance. Despite the temporally correlated user activity in consecutive time slots, it is challenging to unveil the connection between the activity pattern estimated previously, which is imperfect but the only available side information (SI), and the true activity pattern at the current moment, due to the unknown estimation error. In this work, we manage to tackle this challenge under the framework of approximate message passing (AMP). Specifically, thanks to state evolution, the correlation between the activity pattern estimated by AMP at the previous time slot and the true activity pattern at the previous and current moments is quantified explicitly. Based on this well-defined temporal correlation, we further embed this useful SI into the design of the minimum mean-squared error (MMSE) denoisers and log-likelihood ratio (LLR) test based activity detectors under the AMP framework. A theoretical comparison between the SI-aided AMP algorithm and its counterpart without temporal correlation is provided. Moreover, numerical results are given to show the significant gain in activity detection accuracy brought by the SI-aided algorithm.
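The AMP machinery underlying this line of work can be sketched in its simplest form, without temporal side information: recover a sparse activity vector from noisy linear measurements using a soft-threshold denoiser and the Onsager correction term. The dimensions, threshold rule, and Gaussian signal model are assumptions for illustration; the paper uses MMSE denoisers informed by the previous slot's estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

def soft(v, t):
    # Soft-threshold denoiser (stand-in for the paper's MMSE denoiser).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Assumed toy problem: y = A x + w with k of n entries active.
n, m, k = 500, 250, 25
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) + 3.0
y = A @ x_true + 0.01 * rng.normal(size=m)

x, z = np.zeros(n), y.copy()
delta = m / n
for _ in range(30):
    r = x + A.T @ z                               # pseudo-data
    tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)    # threshold from residual level
    x = soft(r, tau)
    # Onsager term: (1/delta) * z * average derivative of the denoiser.
    z = y - A @ x + (z / delta) * np.mean(np.abs(r) > tau)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The Onsager term is what makes the effective noise at each iteration approximately Gaussian, which is the property that allows state evolution to track the estimation error analytically, and hence allows the paper to quantify the reliability of the previous slot's estimate as side information.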
We present AlphaChute: a state-of-the-art algorithm that achieves superhuman performance in the ancient game of Chutes and Ladders. We prove that our algorithm converges to the Nash equilibrium in constant time, and is therefore -- to the best of our knowledge -- the first such formal solution to this game. Surprisingly, despite all this, our implementation of AlphaChute remains relatively straightforward due to domain-specific adaptations. We provide the source code for AlphaChute in the Appendix.