Generative Adversarial Networks (GANs) are a popular formulation for training generative models of complex, high-dimensional data. The standard method for training GANs involves a gradient descent-ascent (GDA) procedure on a minimax optimization problem. This procedure is hard to analyze in general due to the nonlinear nature of the dynamics. We study the local dynamics of GDA for training a GAN with a kernel-based discriminator. This convergence analysis is based on a linearization of the nonlinear dynamical system that describes the GDA iterations, under an \textit{isolated points model} assumption from [Becker et al. 2022]. Our analysis brings out the effect of the learning rates, the regularization, and the bandwidth of the kernel discriminator on the local convergence rate of GDA. Importantly, we show phase transitions that indicate when the system converges, oscillates, or diverges. We also provide numerical simulations that verify our claims.
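To make the setting concrete, the following is a minimal 1-D sketch of GDA with a kernel discriminator in the isolated-points regime, where the true samples are far apart relative to the kernel bandwidth. The loss form, learning rates, and bandwidth are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def k(u, v, bw):
    """Pairwise Gaussian kernel matrix between 1-D point sets u and v."""
    return np.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * bw ** 2))

rng = np.random.default_rng(0)
bw, lam = 0.5, 0.1          # kernel bandwidth and discriminator regularization
eta_d, eta_g = 0.2, 0.2     # discriminator and generator learning rates

x = np.array([-5.0, 0.0, 5.0])              # true samples, far apart vs. bw
y = x + 0.4 * rng.standard_normal(x.size)   # generated samples (trainable)
a = np.zeros(x.size)                        # discriminator weights at centers x

for _ in range(500):
    Kxx, Kyx = k(x, x, bw), k(y, x, bw)
    # V(a, y) = mean_i f(x_i) - mean_j f(y_j) - (lam/2)||a||^2,
    # with a kernel discriminator f(z) = sum_i a_i k(z, x_i).
    grad_a = Kxx.mean(axis=0) - Kyx.mean(axis=0) - lam * a
    grad_y = ((y[:, None] - x[None, :]) * Kyx * a).sum(axis=1) / (bw**2 * y.size)
    a += eta_d * grad_a      # discriminator ascends V
    y -= eta_g * grad_y      # generator descends V

print(np.round(np.abs(y - x), 3))   # distances to the true points shrink here
```

Varying `eta_d`, `eta_g`, `lam`, and `bw` in this sketch is a quick way to observe the convergent, oscillatory, and divergent regimes that the phase-transition analysis predicts.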
Wideband millimeter-wave communication systems can be extended to provide radar-like sensing capabilities on top of data communication, in a cost-effective manner. However, the development of joint communication and sensing technology is hindered by practical challenges, such as occlusions to the line-of-sight path and clock asynchrony between devices. The latter introduces time-varying timing and frequency offsets that prevent the estimation of sensing parameters and, in turn, the use of standard signal processing solutions. Existing approaches cannot be applied to commonly used phased-array receivers, as they build on stringent assumptions about the multipath environment and are computationally complex. We present JUMP, the first system enabling practical bistatic and asynchronous joint communication and sensing, while achieving accurate target tracking and micro-Doppler extraction in realistic conditions. Our system compensates for the timing offset by exploiting the channel correlation across subsequent packets. Further, it tracks multipath reflections and eliminates frequency offsets by observing the phase of a dynamically selected static reference path. JUMP has been implemented on a 60 GHz experimental platform, on which we perform extensive evaluations of human motion sensing, including non-line-of-sight scenarios. In our results, JUMP attains tracking performance comparable to that of a full-duplex monostatic system and micro-Doppler quality similar to that of a phase-locked bistatic receiver.
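The two offset-compensation ideas can be sketched on a toy channel impulse response (CIR); the packet model, tap indices, and noise level below are illustrative assumptions, not JUMP's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
taps = 64
# Toy CIR: a strong static path (tap 5) and a weaker human reflection (tap 20).
cir = np.zeros(taps, complex)
cir[5], cir[20] = 1.0, 0.3

packets = []
for p in range(8):
    to = rng.integers(-3, 4)               # per-packet timing offset
    cfo_phase = rng.uniform(0, 2 * np.pi)  # per-packet CFO-induced phase
    noise = 0.01 * (rng.standard_normal(taps) + 1j * rng.standard_normal(taps))
    packets.append(np.exp(1j * cfo_phase) * np.roll(cir, to) + noise)

ref = np.abs(packets[0])
aligned = []
for h in packets:
    # Timing offset: correlate CIR magnitudes across packets; the correlation
    # peak gives the shift of this packet relative to the reference packet.
    xc = np.fft.ifft(np.fft.fft(np.abs(h)) * np.conj(np.fft.fft(ref))).real
    s = int(np.argmax(xc))
    s = s - taps if s > taps // 2 else s
    h = np.roll(h, -s)
    # Frequency offset: rotate by the phase of a static reference path
    # (tap 5 stands in for JUMP's dynamically selected static path).
    aligned.append(h * np.exp(-1j * np.angle(h[5])))

# After both corrections the target-path phase is stable across packets,
# which is what enables micro-Doppler extraction.
print(np.round(np.angle([h[20] for h in aligned]), 3))
```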
In this paper, we study a navigation problem where a mobile robot needs to locate the source of a mmWave wireless signal. Using the directionality properties of the signal, we propose an estimation and path-planning algorithm that can efficiently navigate cluttered indoor environments. We formulate extended Kalman filters for emitter-location estimation in the cases where the signal is received in line-of-sight or after reflections. We then propose to plan motion trajectories based on belief-space dynamics in order to minimize the uncertainty of the position estimates. The associated nonlinear optimization problem is solved by a state-of-the-art constrained iLQR solver. In particular, we propose a method that can handle a large number of obstacles (~300) with reasonable computation times. We validate the approach in an extensive set of simulations. We show that our estimators can help increase the navigation success rate and that planning to reduce estimation uncertainty can improve the overall task completion speed.
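A minimal sketch of the line-of-sight case follows: an extended Kalman filter that fuses bearing (angle-of-arrival) measurements of a static emitter as the robot moves. The geometry, noise level, and straight-line path are illustrative; the reflected-path formulation and the iLQR planner are omitted.

```python
import numpy as np

def ekf_bearing_update(mu, Sigma, z, robot, R):
    """One EKF update for a bearing measurement of a static 2-D emitter."""
    dx, dy = mu - robot
    q = dx ** 2 + dy ** 2
    z_hat = np.arctan2(dy, dx)                 # predicted bearing
    H = np.array([[-dy / q, dx / q]])          # measurement Jacobian
    innov = np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))  # angle wrap
    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)
    mu = mu + K[:, 0] * innov
    Sigma = (np.eye(2) - K @ H) @ Sigma
    return mu, Sigma

rng = np.random.default_rng(2)
emitter = np.array([4.0, 3.0])                 # unknown mmWave source
mu, Sigma = np.array([1.0, 1.0]), 9.0 * np.eye(2)
R = np.array([[np.deg2rad(5.0) ** 2]])         # bearing noise covariance

for t in range(40):
    robot = np.array([0.15 * t, 0.0])          # robot drives along x
    d = emitter - robot
    z = np.arctan2(d[1], d[0]) + np.deg2rad(5.0) * rng.standard_normal()
    mu, Sigma = ekf_bearing_update(mu, Sigma, z, robot, R)

print(np.round(mu, 2), np.round(np.trace(Sigma), 3))  # estimate, uncertainty
```

The trace of the covariance is the quantity a belief-space planner would seek to shrink when choosing the next motion.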
Modern cellular systems rely increasingly on simultaneous communication in multiple discontinuous bands for macro-diversity and increased bandwidth. Multi-frequency communication is particularly crucial in the millimeter wave (mmWave) and Terahertz (THz) frequencies, as these bands are often coupled with lower frequencies for robustness. Evaluation of these systems requires statistical models that can capture the joint distribution of the channel paths across multiple frequencies. This paper presents a general neural network based methodology for training multi-frequency double directional statistical channel models. In the proposed approach, each channel is described as a multi-clustered set, and a generative adversarial network (GAN) is trained to generate random multi-cluster profiles, where the generated cluster data include the angles and delays of the clusters along with vectors of received powers, angular spreads, and delay spreads at the different frequencies. The model can be readily applied to multi-frequency link- or network-layer simulation. The methodology is demonstrated by modeling urban micro-cellular links at 28 and 140 GHz, trained from extensive ray-tracing data. The methodology makes minimal statistical assumptions, and experiments show that the model can capture interesting statistical relationships between frequencies.
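To illustrate the data structure involved, here is a hypothetical sketch of such a generator in PyTorch: a latent vector is mapped to a fixed number of clusters, each carrying angles, a delay, and per-frequency power, angular-spread, and delay-spread entries. The dimensions, layer sizes, and feature layout are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

N_CLUSTERS, LATENT = 5, 16
FREQS = 2                        # e.g. 28 and 140 GHz
PER_CLUSTER = 2 + 1 + 3 * FREQS  # two angles, one delay, plus per-frequency
                                 # power, angular spread, and delay spread

class ClusterGenerator(nn.Module):
    """Maps a latent vector to a multi-cluster, multi-frequency profile."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_CLUSTERS * PER_CLUSTER),
        )

    def forward(self, z):
        return self.net(z).view(-1, N_CLUSTERS, PER_CLUSTER)

g = ClusterGenerator()
profile = g(torch.randn(4, LATENT))  # batch of 4 random link profiles
print(profile.shape)                 # torch.Size([4, 5, 9])
```

Adversarial training against measured or ray-traced cluster profiles would then push these generated profiles toward the true joint distribution across frequencies.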
Generative Adversarial Networks (GANs) are a widely used tool for generative modeling of complex data. Despite their empirical success, the training of GANs is not fully understood due to the min-max optimization of the generator and discriminator. This paper analyzes these joint dynamics when the true samples, as well as the generated samples, are discrete, finite sets, and the discriminator is kernel-based. A simple yet expressive framework for analyzing training, called the $\textit{Isolated Points Model}$, is introduced. In the proposed model, the distance between true samples greatly exceeds the kernel width, so each generated point is influenced by at most one true point. Our model enables precise characterization of the conditions for convergence, both to good and bad minima. In particular, the analysis explains two common failure modes: (i) approximate mode collapse and (ii) divergence. Numerical simulations are provided that replicate these behaviors as predicted.
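The isolated-points assumption is easy to visualize numerically: with true samples separated by much more than the kernel width, the kernel matrix between generated and true points is effectively diagonal. The bandwidth and point placement below are illustrative.

```python
import numpy as np

sigma = 0.5                      # kernel bandwidth (illustrative)
x = np.array([-5.0, 0.0, 5.0])   # true samples, separation 5 >> sigma
y = x + 0.3                      # generated points near their targets

# Gaussian kernel between each generated point and each true point.
K = np.exp(-(y[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
print(np.round(K, 6))
# The matrix is numerically diagonal: each generated point interacts with at
# most one true point, which is what makes the local dynamics tractable.
```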
Site-specific radio frequency (RF) propagation prediction increasingly relies on models built from visual data captured by cameras and LIDAR sensors. When a system operates in a dynamic setting, however, the environment may be only partially observed. This paper introduces a method to extract statistical channel models given partial observations of the surrounding environment. We propose a simple heuristic algorithm that performs ray tracing on the partial environment and then uses machine-learning-trained predictors to estimate the channel and its uncertainty from features extracted from the partial ray-tracing results. It is shown that the proposed method can interpolate between fully statistical models, when no partial information is available, and fully deterministic models, when the environment is completely observed. The method can also capture the degree of uncertainty of the propagation predictions depending on how much of the region has been explored. The methodology is demonstrated in a robotic navigation application simulated on a set of indoor maps, with detailed models constructed using state-of-the-art navigation, simultaneous localization and mapping (SLAM), and computer vision methods.
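As a sketch of the predict-with-uncertainty idea, the following uses a random forest on synthetic features, with the per-tree spread as a crude uncertainty estimate; the feature set, data model, and uncertainty proxy are all assumptions, not the paper's trained predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Hypothetical features from ray tracing on a *partially* observed map:
# [strongest partial-path gain (dB), number of traced paths, fraction of
# the map explored]. Targets are true path gains; all data is synthetic.
n = 2000
explored = rng.uniform(0, 1, n)
partial_gain = -80 + 10 * rng.standard_normal(n)
n_paths = rng.integers(0, 6, n)
true_gain = partial_gain + (1 - explored) * 8 * rng.standard_normal(n)

X = np.column_stack([partial_gain, n_paths, explored])
model = RandomForestRegressor(n_estimators=100).fit(X, true_gain)

# Per-tree spread as a rough uncertainty estimate: it should typically
# shrink as the explored fraction grows (deterministic limit) and widen
# when little of the environment is known (statistical limit).
x_new = np.array([[-80.0, 3, 0.1], [-80.0, 3, 0.9]])
preds = np.stack([t.predict(x_new) for t in model.estimators_])
print(preds.mean(axis=0), preds.std(axis=0))
```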
High-rank line-of-sight (LOS) MIMO systems have attracted considerable attention for millimeter wave and THz communications. The small wavelengths at these frequencies enable spatial multiplexing with massive data rates at long distances. Such systems are also being considered for multi-path non-LOS (NLOS) environments. In these scenarios, standard channel models based on plane waves cannot capture the curvature of each wavefront, which is necessary to model spatial multiplexing. This work presents a novel and simple multi-path wireless channel parametrization in which each path is replaced by a LOS path from a reflected image source. The model fully captures the spherical nature of each wavefront and uses only two additional parameters relative to the standard plane-wave model. Moreover, the parameters can be easily captured in standard ray tracing. The accuracy of the approach is demonstrated on detailed ray-tracing simulations at 28 GHz and 140 GHz in a dense urban area.
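A minimal sketch of the image-source idea: replace a reflected path by a LOS path from a mirrored copy of the TX array and build the channel from exact per-element distances. The geometry and array sizes below are hypothetical.

```python
import numpy as np

fc = 28e9                         # one of the abstract's two bands
lam = 3e8 / fc

def ula(center, axis, n, spacing):
    """Element positions of a uniform linear array along a unit axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    steps = (np.arange(n) - (n - 1) / 2)[:, None]
    return np.asarray(center, float) + steps * spacing * axis

# The reflected path is modeled as a LOS path from a mirrored ("image")
# copy of the TX array about 21 m away. Element spacing is set near
# sqrt(lam * R / n), where wavefront curvature can support high rank.
n, R = 8, 21.0
sp = np.sqrt(lam * R / n)
rx = ula([0.0, 0.0, 1.5], [0.0, 1.0, 0.0], n, sp)
tx_img = ula([20.0, 6.0, 1.5], [0.0, 1.0, 0.0], n, sp)

# Spherical-wavefront channel: exact per-element distances, not one plane wave.
d = np.linalg.norm(rx[:, None, :] - tx_img[None, :, :], axis=-1)
H = np.exp(-2j * np.pi * d / lam) / d

sv = np.linalg.svd(H, compute_uv=False)
print(np.round(sv / sv[0], 3))    # several non-negligible singular values;
                                  # a plane-wave model of this path is rank one
```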
Empirical observations of high-dimensional phenomena, such as the double descent behavior, have attracted considerable interest in understanding classical techniques such as kernel methods, and their implications for explaining the generalization properties of neural networks. Many recent works analyze such models in a certain high-dimensional regime where the covariates are independent and the number of samples and the number of covariates grow at a fixed ratio (i.e., proportional asymptotics). In this work, we show that for a large class of kernels, including the neural tangent kernel of fully connected networks, kernel methods can only perform as well as linear models in this regime. More surprisingly, when the data is generated by a kernel model in which the relationship between the input and the response can be very nonlinear, we show that linear models are in fact optimal, i.e., linear models achieve the minimum risk among all models, linear or nonlinear. These results suggest that more complex models of the data, beyond independent features, are needed for high-dimensional analysis.
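A quick empirical check of the claim (not a proof) can be run with off-the-shelf estimators: in the proportional regime with independent covariates, kernel ridge regression with an RBF kernel and plain ridge regression land at comparable test errors. All constants below are illustrative.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

# Proportional regime: n and d grow together (here n/d = 2), covariates iid.
n, d = 1000, 500
X = rng.standard_normal((n, d)) / np.sqrt(d)
Xt = rng.standard_normal((n, d)) / np.sqrt(d)
beta = rng.standard_normal(d)
f = lambda Z: Z @ beta + 0.5 * (Z @ beta) ** 2   # nonlinear target (illustrative)
y, yt = f(X) + 0.1 * rng.standard_normal(n), f(Xt)

krr = KernelRidge(kernel="rbf", gamma=1.0, alpha=1e-3).fit(X, y)
lin = Ridge(alpha=1e-3).fit(X, y)
print("kernel test MSE:", np.mean((krr.predict(Xt) - yt) ** 2))
print("linear test MSE:", np.mean((lin.predict(Xt) - yt) ** 2))
# With iid covariates, pairwise distances concentrate and the RBF kernel
# behaves like a linear kernel plus a constant, so the errors are comparable.
```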
Power consumption is a key challenge in millimeter wave (mmWave) receiver front-ends, due to the need to support high-dimensional antenna arrays at wide bandwidths. Recently, there has been considerable work in developing low-power front-ends, often based on low-resolution ADCs and low-power mixers. A critical but less studied consequence of such designs is the relatively low dynamic range, which in turn exposes the receiver to adjacent-carrier interference and blockers. This paper provides a general mathematical framework for analyzing the performance of mmWave front-ends in the presence of out-of-band interference. The goal is to elucidate the fundamental trade-off between power consumption, interference tolerance, and in-band performance. The analysis is combined with detailed network simulations of cellular systems with multiple carriers, as well as detailed circuit simulations of key components at 140 GHz. The analysis reveals critical bottlenecks for low-power interference robustness and suggests design enhancements for use in practical systems.
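The flavor of the trade-off can be captured with back-of-the-envelope formulas: the quantization-limited SNR of a b-bit ADC (about 6.02b + 1.76 dB), the headroom consumed by a blocker, and a Walden-style power model. The sampling rate and figure of merit below are illustrative assumptions, not the paper's circuit-level analysis.

```python
# Back-of-envelope model (assumptions, not the paper's exact framework):
# - quantization-limited SNR of a b-bit ADC: 6.02*b + 1.76 dB at full scale
# - a blocker at `inr` dB above the desired signal eats AGC headroom, so the
#   signal is digitized that many dB below full scale
# - ADC power via a Walden-style figure of merit: P = FOM * fs * 2**bits
fs = 2e9      # sampling rate for a wideband mmWave front end (illustrative)
fom = 50e-15  # joules per conversion step (illustrative)

for bits in (3, 4, 6, 8):
    for inr in (0, 20, 40):  # blocker-to-signal ratio in dB
        sqnr = 6.02 * bits + 1.76 - inr
        power_mw = fom * fs * 2 ** bits * 1e3
        print(f"{bits}-bit, blocker +{inr:2d} dB: "
              f"in-band SQNR {sqnr:5.1f} dB, ADC power {power_mw:.1f} mW")
```

The loop makes the bottleneck visible: each extra bit of resolution buys about 6 dB of blocker tolerance but doubles the ADC power under this model.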
Advanced wearable devices are increasingly incorporating high-resolution multi-camera systems. As state-of-the-art neural networks for processing the resulting image data are computationally demanding, there has been growing interest in leveraging fifth-generation (5G) wireless connectivity and mobile edge computing to offload this processing to the cloud. To assess this possibility, this paper presents a detailed simulation and evaluation of 5G wireless offloading for object detection within a powerful new smart wearable called VIS4ION, for the Blind and Visually Impaired (BVI). The current VIS4ION system is an instrumented book-bag with high-resolution cameras, vision processing, and haptic and audio feedback. The paper considers uploading the camera data to a mobile edge cloud to perform real-time object detection and transmitting the detection results back to the wearable. To determine the video requirements, the paper evaluates the impact of video bit rate and resolution on object detection accuracy and range. A new street-scene dataset with labeled objects relevant to BVI navigation is leveraged for the analysis. The vision evaluation is combined with a detailed full-stack wireless network simulation to determine the distribution of throughputs and delays over real navigation paths, using ray tracing from new high-resolution 3D models of an urban environment. For comparison, the wireless simulation considers both a standard 4G Long Term Evolution (LTE) carrier and a high-rate 5G millimeter-wave (mmWave) carrier. The work thus provides a thorough and realistic assessment of edge computing with mmWave connectivity in an application with both high-bandwidth and low-latency requirements.
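For a rough sense of the requirements, a simple uplink budget can be computed from resolution, frame rate, and a typical compression operating point; all numbers below are illustrative, not the paper's measured results.

```python
# Rough uplink budget for offloading camera frames (all numbers illustrative;
# the paper derives the actual requirements from detection experiments).
width, height, fps = 1920, 1080, 30
bits_per_pixel = 0.1          # a typical H.264/H.265 operating point
bitrate = width * height * fps * bits_per_pixel   # bits per second

rtt_ms, edge_ms = 20, 30      # assumed network round trip and edge inference
frame_ms = 1000 / fps         # one frame of capture delay

print(f"uplink video rate: {bitrate / 1e6:.1f} Mbps")
print(f"end-to-end latency floor: {rtt_ms + edge_ms + frame_ms:.0f} ms")
```

Budgets of this shape explain why the application stresses both bandwidth and latency at once, and hence why the mmWave carrier is of interest alongside LTE.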