Native jamming mitigation is essential for the security and resilience of future 6G wireless networks. In this paper, a resilient-by-design framework for effective anti-jamming in MIMO-OFDM wireless communications is introduced. A novel approach is explored that integrates information from wireless sensing services to develop anti-jamming strategies that do not rely on prior information or assumptions about the adversary's concrete setup. To this end, a method is proposed that replaces conventional noise covariance estimation in anti-jamming with a surrogate covariance model, which instead incorporates sensing information on the jamming signal's directions-of-arrival (DoAs) to provide an effective approximation of the true jamming strategy. The study further focuses on integrating this novel, sensing-assisted approach into the joint optimization of beamforming, user scheduling, and power allocation for a multi-user MIMO-OFDM uplink setting. Despite the NP-hard nature of this optimization problem, it can be solved effectively using an iterative water-filling approach. To assess the effectiveness of the proposed sensing-assisted jamming mitigation, the corresponding worst-case jamming strategy, which aims to minimize the total user sum rate, is investigated. Numerical simulations confirm the robustness of our approach against both worst-case and barrage jamming, demonstrating its potential to address a wide range of jamming scenarios. Since the sensing information is integrated directly at the physical layer, resilience is incorporated preemptively, by design.
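The water-filling principle behind such iterative solvers can be sketched for the simplest single-user case of allocating a power budget over parallel subchannels (a minimal illustration, not the paper's joint beamforming, scheduling, and power optimization; all numbers below are illustrative assumptions):

```python
import numpy as np

def water_filling(noise, p_total, iters=100):
    """Classic water-filling power allocation over parallel subchannels.

    noise:   effective noise-plus-interference level per subchannel
    p_total: total transmit power budget
    Returns the per-subchannel power allocation p_i = max(mu - noise_i, 0),
    with the water level mu found by bisection so that sum(p) = p_total.
    """
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + p_total
    for _ in range(iters):  # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - noise, 0.0)

p = water_filling([1.0, 2.0, 4.0], p_total=3.0)
# stronger subchannels (lower noise) receive more power; the weakest may get none
```

In a jammed setting, the per-subchannel noise level would be replaced by the effective noise-plus-jamming power, so that heavily jammed resources are naturally de-emphasized.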
This manuscript investigates the information-theoretic limits of integrated sensing and communications (ISAC), aiming for simultaneous reliable communication and precise channel state estimation. We model such a system as a state-dependent discrete memoryless channel (SD-DMC) with or without channel feedback and with generalized side information at the transmitter and the receiver, where the joint task of message decoding and state estimation is performed at the receiver. The relationship between the achievable communication rate and the estimation error, the capacity-distortion (C-D) trade-off, is characterized across different causality levels of the side information. This framework is shown to be capable of modeling various practical scenarios, including monostatic and bistatic radar systems, by assigning different meanings to the side information. The analysis is then extended to the two-user degraded broadcast channel, for which we derive an achievable C-D region that is tight under certain conditions. To solve the optimization problems arising in the computation of C-D functions/regions, we propose a proximal block coordinate descent (BCD) method, prove its convergence to a stationary point, and derive a stopping criterion. Finally, several representative examples are studied to demonstrate the versatility of our framework and the effectiveness of the proposed algorithm.
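The shape of a proximal BCD iteration, where each block update minimizes the objective in that block plus a quadratic proximal term anchored at the previous iterate, can be sketched on a toy two-block quadratic (purely illustrative; the paper's objective over C-D functions/regions is different, and the step size t and the objective below are assumptions):

```python
def proximal_bcd(t=1.0, iters=200):
    """Two-block proximal BCD on the toy objective
        f(x, y) = 0.5*(x - y)**2 + 0.5*x**2 + 0.5*(y - 1)**2.
    Each block update solves
        argmin_z  f(z, other_block) + (1 / (2*t)) * (z - z_prev)**2,
    which has a closed form here because f is quadratic in each block.
    """
    x = y = 0.0
    for _ in range(iters):
        # x-update: (x - y) + x + (x - x_prev)/t = 0
        x = (y + x / t) / (2.0 + 1.0 / t)
        # y-update: (y - x) + (y - 1) + (y - y_prev)/t = 0
        y = (x + 1.0 + y / t) / (2.0 + 1.0 / t)
    return x, y

x, y = proximal_bcd()
# converges to the unique stationary point (1/3, 2/3)
```

The proximal term is what yields convergence guarantees to a stationary point even when the joint problem is non-convex, at the cost of slightly damped per-block steps.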
In this paper, a digital twinning framework for indoor integrated sensing, communications, and robotics is proposed, designed, and implemented. Besides leveraging powerful robotics and ray-tracing technologies, the framework enables integration with real-world sensors and reactive updates triggered by changes in the environment. The framework is designed with commercial off-the-shelf components in mind, thus facilitating experimentation across communication, sensing, and robotics. Experimental results showcase the feasibility and accuracy of indoor localization using digital twins and validate our implementation both qualitatively and quantitatively.
Deep learning still has drawbacks in terms of trustworthiness, i.e., the property of being comprehensible, fair, safe, and reliable. To mitigate the potential risks of AI, clear obligations associated with trustworthiness have been proposed via regulatory guidelines, e.g., in the European AI Act. A central question is therefore to what extent trustworthy deep learning can be realized. Establishing the properties constituting trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. Motivated by the observation that the current evolution of deep learning models necessitates a change in computing technology, we derive a mathematical framework that enables us to analyze whether a transparent implementation in a given computing model is feasible. As an example, we apply our trustworthiness framework to analyze deep learning approaches for inverse problems in digital and analog computing models, represented by Turing and Blum-Shub-Smale machines, respectively. Based on previous results, we find that Blum-Shub-Smale machines have the potential to establish trustworthy solvers for inverse problems under fairly general conditions, whereas Turing machines cannot guarantee trustworthiness to the same degree.
In this paper, we investigate the fundamental limits of MIMO-OFDM integrated sensing and communications (ISAC) systems through a Bayesian Cram\'er-Rao bound (BCRB) analysis. We derive the BCRB for joint channel parameter estimation and data symbol detection, in which a performance trade-off between the two functionalities is observed. We formulate the linear precoder design as an optimization problem and propose a stochastic Riemannian gradient descent (SRGD) approach to solve this non-convex problem. We analyze the optimality conditions and show that SRGD ensures convergence with high probability. Simulation results verify our analyses and demonstrate fast convergence. Finally, the performance trade-off is illustrated and investigated.
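The manifold machinery underlying a Riemannian gradient descent method, computing the Euclidean gradient, projecting it onto the tangent space, and retracting back to the manifold, can be sketched on the unit sphere (a minimal eigenvector toy problem with assumed step size and iteration count, not the paper's precoder design or its stochastic variant):

```python
import numpy as np

def riemannian_gd_sphere(A, iters=500, step=0.1, seed=0):
    """Riemannian gradient descent on the unit sphere minimizing w^T A w.

    Each iteration: Euclidean gradient -> tangent-space projection at w
    -> gradient step -> retraction (renormalization) back to the sphere.
    The minimizer is an eigenvector of the smallest eigenvalue of A.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        g = 2.0 * A @ w                  # Euclidean gradient of w^T A w
        g_tan = g - (w @ g) * w          # project onto tangent space at w
        w = w - step * g_tan             # descend along the tangent direction
        w /= np.linalg.norm(w)           # retract back onto the sphere
    return w

A = np.diag([3.0, 2.0, 0.5])
w = riemannian_gd_sphere(A)
# converges (up to sign) to the eigenvector of the smallest eigenvalue
```

A stochastic variant would replace the exact gradient with a noisy estimate at each step while keeping the projection and retraction unchanged.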
In this survey, we explore the fundamental question of whether the next generation of artificial intelligence requires quantum computing. Artificial intelligence plays an increasingly crucial role in many aspects of our daily lives and is central to the fourth industrial revolution. It is therefore imperative that artificial intelligence be reliable and trustworthy. However, there are still many issues with the reliability of artificial intelligence, such as privacy, responsibility, safety, and security, in areas such as autonomous driving, healthcare, and robotics. These problems can have various causes, including insufficient data, biases, and robustness problems, as well as fundamental issues such as computability problems on digital hardware. These computability problems are rooted in the fact that digital hardware is based on the computing model of the Turing machine, which is inherently discrete. Notably, our findings demonstrate that digital hardware is inherently constrained in solving certain problems in optimization, deep learning, and differential equations. These limitations therefore carry substantial implications for the field of artificial intelligence, in particular for machine learning. Furthermore, although quantum computers are well known to exhibit a quantum advantage for certain classes of problems, our findings establish that some of these limitations persist when employing quantum computing models based on the quantum circuit or quantum Turing machine paradigm. In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
We address the resilience of future 6G MIMO communications by considering an uplink scenario in which multiple legitimate transmitters try to communicate with a base station in the presence of an adversarial jammer. The jammer possesses full knowledge of the system and the physical parameters of the legitimate link, while the base station only knows the uplink channels and the angle-of-arrival (AoA) of the jamming signals. Furthermore, the legitimate transmitters are oblivious to the fact that jamming takes place, so the burden of guaranteeing resilience falls on the receiver. For this setting, we derive an optimal jamming strategy that aims to minimize the rate of the strongest user, as well as multiple receive strategies: one based on a lower bound on the achievable signal-to-interference-plus-noise ratio (SINR), one based on a zero-forcing (ZF) design, and one based on a minimum SINR constraint. Numerical studies show that the proposed anti-jamming approaches keep the sum rate of the system much higher than without protection, even when the jammer has considerably more transmit power and even if the jamming signals arrive from the same direction as those of the legitimate users.
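The zero-forcing idea, placing a spatial null in the jammer's known angle-of-arrival while preserving gain toward a user, can be sketched for a uniform linear array (an illustrative single-user, single-jammer toy; the array size, angles, and steering model below are assumptions, not the paper's setup):

```python
import numpy as np

def steering(theta_deg, n_ant, d=0.5):
    """Steering vector of a uniform linear array with spacing d (in wavelengths)."""
    k = np.arange(n_ant)
    return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def zf_receive_beamformer(theta_user, theta_jam, n_ant=8):
    """Zero-forcing receive filter: project the user's steering vector onto
    the orthogonal complement of the jammer's spatial signature, so the
    output contains no jamming component from that direction."""
    h = steering(theta_user, n_ant)
    g = steering(theta_jam, n_ant)
    # orthogonal projector onto the complement of span{g}
    P = np.eye(n_ant) - np.outer(g, g.conj()) / (g.conj() @ g)
    w = P @ h
    return w / np.linalg.norm(w)

w = zf_receive_beamformer(theta_user=20.0, theta_jam=-40.0)
# |w^H g| ~ 0: the jammer direction is nulled; most of the user gain survives
```

When user and jammer directions nearly coincide, the projection removes most of the user's energy as well, which is why SINR-based designs are considered alongside pure zero-forcing.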
Wireless channel sensing is one of the key enablers of integrated sensing and communication (ISAC), helping communication networks understand the surrounding environment. In this work, we consider MIMO-OFDM systems and aim to design optimal and robust waveforms for accurate channel parameter estimation given the allocated OFDM resources. We first derive the Fisher information matrix (FIM) and formulate the waveform design problem as maximizing the log-determinant of the FIM. We then account for uncertainty in the parameters and state the stochastic optimization problem for a robust design. We propose the Riemannian Exact Penalty Method via Smoothing (REPMS) and its stochastic version, SREPMS, to solve the resulting constrained non-convex problems. In simulations, we show that REPMS yields results comparable to semidefinite relaxation (SDR) at a much shorter running time. Finally, the robust waveforms designed with SREPMS are investigated and shown to perform well under channel perturbations.
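The log-determinant (D-optimality) criterion can be illustrated on a toy linear Gaussian model, where the FIM is X^T X / sigma^2 and well-spread waveform columns achieve a larger log det than nearly collinear ones under the same energy budget (a sketch under these assumptions, not the paper's MIMO-OFDM FIM):

```python
import numpy as np

def fim_logdet(X, sigma2=1.0):
    """D-optimality criterion: log det of the FIM for the linear Gaussian
    model y = X @ theta + n with n ~ N(0, sigma2 * I), i.e. F = X^T X / sigma2."""
    sign, logdet = np.linalg.slogdet(X.T @ X / sigma2)
    return logdet

rng = np.random.default_rng(1)
# "good" design: orthonormal columns spread the observation energy evenly
X_good = np.linalg.qr(rng.standard_normal((8, 3)))[0]
# "bad" design: nearly collinear columns, rescaled to the same total energy
X_bad = X_good.copy()
X_bad[:, 2] = X_bad[:, 1] + 1e-3 * rng.standard_normal(8)
X_bad *= np.linalg.norm(X_good) / np.linalg.norm(X_bad)
# fim_logdet(X_good) is near 0, while fim_logdet(X_bad) is strongly negative:
# an almost-singular FIM means some parameter combination is barely observable
```

Maximizing this criterion over the waveform, subject to resource constraints, is exactly the kind of constrained non-convex problem the penalty-based Riemannian methods target.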
Optimization problems are a staple of today's scientific and technical landscape. At present, however, solvers of such problems are almost exclusively run on digital hardware. Using Turing machines as a mathematical model for any type of digital hardware, we analyze in this paper fundamental limitations of this conceptual approach to solving optimization problems. Since in most applications the optimizer itself is of significantly more interest than the optimal value of the corresponding function, we focus on the computability of the optimizer. In fact, we show that in various situations the optimizer is unattainable on Turing machines, and consequently on digital computers. Worse still, there does not even exist a Turing machine that approximates the optimizer up to a certain constant error. We prove such results for a variety of well-known problems from very different areas, including artificial intelligence, financial mathematics, and information theory, often deriving the even stronger result that such problems are not Banach-Mazur computable, not even in an approximate sense.