Vincent K. N. Lau

A unified framework for STAR-RIS coefficients optimization

Oct 13, 2023
Hancheng Zhu, Yuanwei Liu, Yik Chung Wu, Vincent K. N. Lau

Simultaneously transmitting and reflecting (STAR) reconfigurable intelligent surfaces (RISs), which serve users located on both sides of the surface, have recently emerged as a promising enhancement to the traditional reflective-only RIS. Motivated by the lack of a unified comparison of communication systems equipped with different modes of STAR-RIS, and by the performance degradation caused by constraints involving discrete selection, this paper proposes a unified optimization framework for handling the STAR-RIS operating-mode and discrete-phase constraints. With a judiciously introduced penalty term, the framework transforms the original problem into two iterative subproblems, one containing the selection-type constraints and the other handling the remaining wireless resources. The convergent point of the overall algorithm is shown to be at least a stationary point under mild conditions. As an illustrative example, the proposed framework is applied to a sum-rate maximization problem in downlink transmission. Simulation results show that the algorithms derived from the proposed framework outperform existing algorithms tailored for different STAR-RIS scenarios. Furthermore, a STAR-RIS with 4 or even 2 discrete phases is found to achieve almost the same sum-rate performance as the continuous-phase setting, showing for the first time that discrete phases are not necessarily a cause of significant performance degradation.
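
The penalty-based alternation described above can be pictured with a small numerical sketch. The objective, step sizes, and penalty schedule below are hypothetical stand-ins chosen only for illustration (the paper's actual subproblems involve the full wireless resource allocation); the sketch shows the mechanics of coupling a continuous phase vector to its discrete copy through a growing penalty:

```python
import numpy as np

def project_discrete_phases(theta, num_levels):
    """Nearest point in the uniform discrete phase set {0, 2*pi/L, ..., 2*pi*(L-1)/L}."""
    step = 2 * np.pi / num_levels
    return np.round(np.mod(theta, 2 * np.pi) / step) % num_levels * step

def penalized_alternating_opt(objective_grad, theta0, num_levels,
                              rho0=0.01, rho_growth=1.05, outer_iters=400):
    """Alternate between (i) the selection subproblem, solved in closed form by
    projection onto the discrete phase set, and (ii) gradient steps on the smooth
    objective plus a quadratic penalty (rho/2)*||theta - phi||^2 tying the
    continuous phases to their discrete copy. Growing rho drives the two copies
    together, so the limit satisfies the discrete constraint."""
    theta, rho = theta0.astype(float), rho0
    for _ in range(outer_iters):
        phi = project_discrete_phases(theta, num_levels)   # selection subproblem
        for _ in range(20):                                # smooth subproblem
            g = objective_grad(theta) + rho * (theta - phi)
            theta = theta - 0.5 / (1.0 + rho) * g          # step sized for growing rho
        rho *= rho_growth
    return project_discrete_phases(theta, num_levels)

target = np.array([1.3, 2.9, 5.0])   # hypothetical unconstrained optimal phases
phases = penalized_alternating_opt(lambda th: th - target, np.zeros(3), num_levels=4)
```

On this toy quadratic objective, the returned phases land exactly on the 2-bit discrete grid while staying as close as possible to the unconstrained optimum, which is the behavior the framework's stationarity guarantee formalizes.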

Structured Bayesian Compression for Deep Neural Networks Based on The Turbo-VBI Approach

Feb 21, 2023
Chengyu Xia, Danny H. K. Tsang, Vincent K. N. Lau

With the growth of neural network size, model compression has attracted increasing interest in recent research. As one of the most common techniques, pruning has been studied for a long time. By exploiting the structured sparsity of a neural network, existing methods can prune neurons instead of individual weights. However, in most existing pruning methods, the surviving neurons are connected in the network without any structure, and the non-zero weights within each neuron are also randomly distributed. Such an irregular sparse structure can cause very high control overhead and irregular memory access on hardware, and can even increase the computational complexity of the neural network. In this paper, we propose a three-layer hierarchical prior to promote a more regular sparse structure during pruning. The proposed prior achieves structured sparsity at both the per-neuron weight level and the neuron level. We derive an efficient Turbo variational Bayesian inference (Turbo-VBI) algorithm to solve the resulting model compression problem under the proposed prior. The Turbo-VBI algorithm has low complexity and supports more general priors than existing model compression algorithms. Simulation results show that the proposed algorithm promotes a more regular structure in the pruned neural networks while achieving even better compression rate and inference accuracy than the baselines.
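
The two structured-sparsity levels can be pictured with a minimal masking sketch. The score arrays and thresholds below are hypothetical stand-ins for the posterior quantities that the Turbo-VBI algorithm would actually infer; the point is only that pruning acts at two granularities, whole neurons and individual weights inside surviving neurons:

```python
import numpy as np

def structured_prune(W, neuron_scores, weight_scores,
                     neuron_thresh=0.5, weight_thresh=0.5):
    """Two-level structured pruning in the spirit of a hierarchical sparsity
    prior: a neuron whose score falls below neuron_thresh is removed entirely
    (its whole row of weights is zeroed); within surviving neurons, individual
    weights are pruned by their own scores. Scores stand in for posterior
    keep-probabilities."""
    neuron_mask = (neuron_scores >= neuron_thresh)[:, None]   # (N, 1) row mask
    weight_mask = weight_scores >= weight_thresh              # (N, D) entry mask
    return W * (neuron_mask & weight_mask)

# Toy layer: 3 neurons with 4 weights each, all scores chosen by hand.
W = np.ones((3, 4))
neuron_scores = np.array([0.9, 0.1, 0.8])
weight_scores = np.array([[0.9, 0.2, 0.9, 0.9],
                          [0.9, 0.9, 0.9, 0.9],
                          [0.1, 0.9, 0.9, 0.2]])
pruned = structured_prune(W, neuron_scores, weight_scores)
```

The second neuron is dropped as a block regardless of its weight scores, which is what gives the pruned network a regular, hardware-friendly structure.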

Optimized Design for IRS-Assisted Integrated Sensing and Communication Systems in Clutter Environments

Aug 08, 2022
Chikun Liao, Feng Wang, Vincent K. N. Lau

In this paper, we investigate an intelligent reflecting surface (IRS)-assisted integrated sensing and communication (ISAC) system design in a clutter environment. Assisted by an IRS equipped with a uniform linear array (ULA), a multi-antenna base station (BS) communicates with multiple communication users (CUs) and senses multiple targets simultaneously. We consider the IRS-assisted ISAC design with Type-I or Type-II CUs, where a Type-I CU can cancel the interference from the sensing signals and a Type-II CU cannot. In particular, we aim to maximize the minimum sensing beampattern gain among the targets by jointly optimizing the BS transmit beamforming vectors and the IRS phase-shifting matrix, subject to the signal-to-interference-plus-noise ratio (SINR) constraint for each Type-I/Type-II CU, the interference power constraint per clutter source, the transmission power constraint at the BS, and the cross-correlation pattern constraint. Due to the coupling of the BS transmit design variables and the IRS phase-shifting matrix, the formulated max-min IRS-assisted ISAC design problem with Type-I/Type-II CUs is highly non-convex. As such, we propose an efficient algorithm based on alternating optimization and semi-definite relaxation (SDR) techniques. In the case with Type-I CUs, we show that a dedicated sensing signal at the BS is always beneficial for improving the sensing performance. By contrast, a dedicated sensing signal at the BS is not required in the case with Type-II CUs. Numerical results show that the proposed IRS-assisted ISAC design schemes achieve a significant gain over existing benchmark schemes.
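
The max-min beampattern objective can be demonstrated on a stripped-down version of the problem. The sketch below drops the IRS, the CUs, and the clutter constraints entirely and keeps only the BS-side piece: it maximizes the minimum beampattern gain over a few hypothetical target angles with a projected subgradient ascent under a transmit power budget, a simple stand-in for the paper's SDR-based solver:

```python
import numpy as np

rng = np.random.default_rng(5)
M, P = 8, 1.0                              # BS antennas, transmit power budget
angles = np.deg2rad([-40.0, 0.0, 35.0])    # hypothetical target directions

def steer(theta):
    """ULA steering vector with half-wavelength element spacing."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

A = np.stack([steer(t) for t in angles])   # (num_targets, M) steering vectors

# Projected subgradient ascent on min_k a_k^H R a_k with R = V V^H, tr(R) <= P.
V = (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(2 * M)
for _ in range(500):
    R = V @ V.conj().T
    gains = np.real(np.einsum('km,mn,kn->k', A.conj(), R, A))
    k = int(gains.argmin())                             # worst-served target
    V = V + 0.01 * np.outer(A[k], A[k].conj()) @ V      # ascent step on gain_k
    V *= np.sqrt(P / np.real(np.trace(V @ V.conj().T)))  # power projection

iso_gain = P   # isotropic transmission R = (P/M) I gives a^H R a = P everywhere
```

Even this reduced problem shows the max-min shape: after the ascent, the optimized covariance beats isotropic transmission at every target direction simultaneously.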

* 28 pages, 9 figures, single-column full paper 

Sequential Offloading for Distributed DNN Computation in Multiuser MEC Systems

Mar 02, 2022
Feng Wang, Songfu Cai, Vincent K. N. Lau

This paper studies a sequential task offloading problem for a multiuser mobile edge computing (MEC) system. We consider a dynamic optimization approach, which embraces wireless channel fluctuations and random deep neural network (DNN) task arrivals over an infinite horizon. Specifically, we introduce a local CPU workload queue (WD-QSI) at each wireless device (WD) and an MEC server workload queue (MEC-QSI) to model the dynamic workload of DNN tasks at each WD and at the MEC server, respectively. The transmit power and the partitioning of the local DNN task at each WD are dynamically determined based on the instantaneous channel conditions (to capture the transmission opportunities) and the instantaneous WD-QSI and MEC-QSI (to capture the dynamic urgency of the tasks), so as to minimize the average latency of the DNN tasks. The joint optimization can be formulated as an ergodic Markov decision process (MDP), in which the optimality condition is characterized by a centralized Bellman equation. However, a brute-force solution of the MDP is not viable due to the curse of dimensionality as well as the requirement for knowledge of the global state information. To overcome these issues, we first decompose the MDP into multiple lower-dimensional sub-MDPs, each of which is associated with a WD or the MEC server. Next, we develop a parametric online Q-learning algorithm, so that each sub-MDP is solved locally at its associated WD or the MEC server. The proposed solution is completely decentralized in the sense that the transmit power for sequential offloading and the DNN task partitioning can be determined based on the local channel state information (CSI) and the local WD-QSI at the WD only. Additionally, no prior knowledge of the distribution of the DNN task arrivals or the channel statistics is needed at the MEC server.
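
The flavor of one locally solved sub-MDP can be sketched with a tabular stand-in (the paper uses a parametric Q-function; tabular is used here only to keep the sketch short). The queue dynamics, cost weights, and channel model below are all hypothetical: the state is a local queue length plus a good/bad channel flag, and the action is how many task units to offload this slot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sub-MDP for a single wireless device (WD): state = (queue length, channel
# good/bad), action = task units offloaded this slot. Dynamics and costs are
# illustrative stand-ins, not the paper's actual model.
Q_MAX, N_ACTIONS, N_CHANNELS = 10, 3, 2

def step(q, h, a):
    served = min(q, a) if h == 1 else min(q, max(a - 1, 0))  # bad channel serves less
    q_next = min(Q_MAX, q - served + rng.integers(0, 2))     # random DNN task arrivals
    cost = q + a                                             # delay + transmit-power proxy
    return q_next, rng.integers(0, N_CHANNELS), cost         # i.i.d. channel state

Q = np.zeros((Q_MAX + 1, N_CHANNELS, N_ACTIONS))             # local Q-function
q, h = 0, 1
for t in range(20000):
    # epsilon-greedy action on local state only (local CSI + local queue)
    a = rng.integers(N_ACTIONS) if rng.random() < 0.1 else int(Q[q, h].argmin())
    q2, h2, c = step(q, h, a)
    td = c + 0.95 * Q[q2, h2].min() - Q[q, h, a]             # cost-minimizing TD error
    Q[q, h, a] += 0.1 * td
    q, h = q2, h2
```

Everything in the update uses local state only, which is the sense in which the paper's decomposed solution is decentralized.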

* 13 pages, 9 figures, double-column 

Over-the-Air Aggregation for Federated Learning: Waveform Superposition and Prototype Validation

Oct 27, 2021
Huayan Guo, Yifan Zhu, Haoyu Ma, Vincent K. N. Lau, Kaibin Huang, Xiaofan Li, Huabin Nong, Mingyu Zhou

In this paper, we develop an orthogonal-frequency-division-multiplexing (OFDM)-based over-the-air (OTA) aggregation solution for wireless federated learning (FL). In particular, the local gradients at massive IoT devices are modulated onto an analog waveform and then transmitted using the same wireless resources. Achieving perfect waveform superposition is the key challenge here, and it is difficult due to the frame timing offset (TO) and carrier frequency offset (CFO) across devices. To address these issues, we propose a two-stage waveform pre-equalization technique with a customized multiple-access protocol that can estimate and then mitigate the TO and CFO for OTA aggregation. Based on the proposed solution, we develop a hardware transceiver and application software to train a real-world FL task, which learns a deep neural network to predict the received signal strength from global positioning system information. Experiments verify that the proposed OTA aggregation solution achieves performance comparable to offline learning procedures, with high prediction accuracy.
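
The role of pre-equalization can be seen in a frequency-domain toy model. The sketch below collapses each device's residual CFO to a single per-device phase rotation and assumes perfect channel and offset estimates, a drastic simplification of the paper's two-stage estimation; it only illustrates why, after pre-equalization, the superposed waveforms add up to the desired gradient sum:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 64                                  # devices, OFDM subcarriers

grads = rng.normal(size=(K, N))               # local gradients, one entry per subcarrier
h = (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)
cfo = rng.uniform(-np.pi, np.pi, size=K)      # residual CFO modeled as a phase per device

# Pre-equalization: each device inverts its own channel and pre-rotates against
# its estimated CFO, so the K analog waveforms superpose coherently and the
# channel itself computes the sum.
tx = grads / h * np.exp(-1j * cfo)[:, None]
rx = np.sum(h * np.exp(1j * cfo)[:, None] * tx, axis=0)   # what the air adds up
aggregate = rx.real                                       # recovered gradient sum
```

Without the pre-rotation, each device's contribution would arrive with its own phase and the sum would be scrambled, which is exactly the misalignment the two-stage protocol is built to remove.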

Cascaded Channel Estimation for Intelligent Reflecting Surface Assisted Multiuser MISO Systems

Aug 20, 2021
Huayan Guo, Vincent K. N. Lau

This paper investigates uplink cascaded channel estimation for intelligent-reflecting-surface (IRS)-assisted multi-user multiple-input single-output systems. We focus on a sub-6 GHz scenario where the channel propagation is not sparse and the number of IRS elements can be larger than the number of base station (BS) antennas. A novel channel estimation protocol is proposed that does not require on-off amplitude control, thereby avoiding the reflection power loss. In addition, the pilot overhead is substantially reduced by exploiting the common-link structure to decompose the cascaded channel coefficients into the product of common-link variables and user-specific variables. However, these two types of variables are highly coupled, which makes them difficult to estimate. To address this issue, we formulate an optimization-based joint channel estimation problem that utilizes only the covariance of the cascaded channel. We then design a low-complexity alternating optimization algorithm with efficient initialization for this non-convex problem, which achieves a locally optimal solution. To further enhance the estimation accuracy, we propose a new formulation to optimize the training phase-shift configuration for the proposed protocol, and solve it using the successive convex approximation algorithm. Comprehensive simulations verify that the proposed algorithm achieves superior performance compared to various state-of-the-art baseline schemes.
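
The pilot-overhead saving from the common-link structure is easy to see numerically. The dimensions below are hypothetical; the sketch builds every user's cascaded channel from one shared BS-IRS matrix and per-user IRS-user vectors, then counts the unknowns both ways:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K = 4, 32, 6    # BS antennas, IRS elements, users (toy sizes, N > M)

G = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))   # common BS-IRS link
h = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))   # user-IRS links

# Cascaded channel of user k: H_k = G @ diag(h_k). Every user's cascaded
# channel reuses the same common-link matrix G, scaled per IRS element.
H = np.stack([G * h[k][None, :] for k in range(K)])

direct_unknowns = K * M * N         # estimating each H_k independently
factored_unknowns = M * N + K * N   # common-link + user-specific variables
```

Here the factorization cuts the unknowns from 768 to 320, and the gap widens as K grows, which is the structural reason the protocol's pilot overhead shrinks.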

* 13 pages, 10 figures 

Dynamic RAT Selection and Transceiver Optimization for Mobile Edge Computing Over Multi-RAT Heterogeneous Networks

Aug 18, 2021
Feng Wang, Vincent K. N. Lau

Mobile edge computing (MEC) integrated with multiple radio access technologies (RATs) is a promising technique for satisfying the growing low-latency computation demand of emerging intelligent internet of things (IoT) applications. Under the distributed MapReduce framework, this paper investigates the joint RAT selection and transceiver design for over-the-air (OTA) aggregation of intermediate values (IVAs) in wireless multiuser MEC systems, while taking into account the energy budget constraint for the local computing and IVA transmission per wireless device (WD). We aim to minimize the weighted sum of the computation mean squared error (MSE) of the aggregated IVA at the RAT receivers, the WDs' IVA transmission cost, and the associated transmission time delay, which is a mixed-integer and non-convex problem. Based on the Lagrange duality method and primal decomposition, we develop a low-complexity algorithm by solving the WDs' RAT selection problem, the WDs' transmit coefficients optimization problem, and the aggregation beamforming problem. Extensive numerical results are provided to demonstrate the effectiveness and merit of our proposed algorithm as compared with other existing schemes.

* 14 pages, 12 figures, double-column, and submitted for publication 

Multi-Level Over-the-Air Aggregation of Mobile Edge Computing over D2D Wireless Networks

May 02, 2021
Feng Wang, Vincent K. N. Lau

In this paper, we consider a wireless multihop device-to-device (D2D) based mobile edge computing (MEC) system, where the destination wireless device (WD) is scheduled to compute nomographic functions. Under the MapReduce framework, and motivated by reducing communication resource overhead, we propose a new multi-level over-the-air (OTA) aggregation scheme for the destination WD to collect the individual partially aggregated intermediate values (IVAs) for reduction from multiple source WDs in the data shuffling phase. For OTA aggregation per level, the source WDs employ a channel-inverse structure multiplied by their individual transmit coefficients when transmitting over the same time-frequency resource blocks, and the destination WD uses a receive filtering factor to construct the aggregated IVA. Under this setup, we develop a unified transceiver design framework that minimizes the mean squared error (MSE) of the aggregated IVA at the destination WD subject to the source WDs' individual power constraints, by jointly optimizing the source WDs' individual transmit coefficients and the destination WD's receive filtering factor. First, based on the primal decomposition method, we derive the closed-form solution for the special case of a common transmit coefficient, which shows that the common transmit coefficient of all source WDs is determined by the minimum transmit power budget among them. Next, for the general case, we transform the original problem into a quadratic fractional programming problem, and then develop a low-complexity algorithm to obtain a (near-)optimal solution by leveraging Dinkelbach's algorithm along with the Gaussian randomization method.
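
The common-transmit-coefficient special case admits a short noiseless sketch. The channels, power budgets, and unit-power IVAs below are hypothetical; the point is that channel-inverse precoding with one common coefficient c makes the most power-limited source WD the bottleneck, in the spirit of the closed-form result above (in this toy the bottleneck also depends on the channel gains):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 5
h = rng.normal(size=K) + 1j * rng.normal(size=K)   # source-WD channels (toy)
P = rng.uniform(0.5, 2.0, size=K)                  # per-WD transmit power budgets
ivas = rng.normal(size=K)                          # unit-power IVAs to aggregate

# Channel-inverse precoding with a common coefficient c: source WD k sends
# c * ivas[k] / h[k], so its transmit power is c**2 / |h[k]|**2, and the
# largest feasible c is set by the most power-limited WD.
c = np.min(np.sqrt(P) * np.abs(h))
tx_power = c**2 / np.abs(h)**2
rx = np.sum(h * (c * ivas / h))                    # noiseless superposition in the air
aggregated = rx.real / c                           # receive filtering at destination WD
```

Every WD respects its own power budget, the bottleneck WD transmits at exactly full power, and the destination recovers the exact sum of IVAs; in the paper's noisy setting the same structure trades this scaling against the aggregation MSE.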

* 30 pages, 7 figures, and submitted for possible journal publication 

Turning Channel Noise into an Accelerator for Over-the-Air Principal Component Analysis

Apr 21, 2021
Zezhong Zhang, Guangxu Zhu, Rui Wang, Vincent K. N. Lau, Kaibin Huang

In recent years, attempts to distill mobile data into useful knowledge have led to the deployment of machine learning algorithms at the network edge. Principal component analysis (PCA) is a classic technique for extracting the linear structure of a dataset, which is useful for feature extraction and data compression. In this work, we propose the deployment of distributed PCA over a multi-access channel based on stochastic gradient descent, to learn the dominant feature space of a dataset distributed across multiple devices. Over-the-air aggregation is adopted to reduce the multi-access latency, giving the name over-the-air PCA. The novelty of this design lies in exploiting channel noise to accelerate the descent in the region around each saddle point encountered by gradient descent, thereby increasing the convergence speed of over-the-air PCA. The idea is materialized by a power-control scheme that detects the type of descent region and controls the level of channel noise accordingly. The scheme is proved to achieve a faster convergence rate than the case without power control.
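
The noise-aided saddle-escape idea can be mimicked in a few lines with a synthetic single-machine stand-in: stochastic Oja-type updates estimate the top principal component, and extra noise is injected only when the stochastic gradient is small, a crude proxy for the paper's detection of the descent-region type (which in the real system modulates the channel noise via power control). All model parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5

# Synthetic dataset with one dominant principal direction u.
u = np.zeros(d); u[0] = 1.0
X = rng.normal(size=(2000, d)) + 3.0 * rng.normal(size=(2000, 1)) * u

w = rng.normal(size=d)
w /= np.linalg.norm(w)
for x in X:
    y = x @ w
    g = y * x - y**2 * w                     # Oja's-rule stochastic gradient
    if np.linalg.norm(g) < 0.1:              # flat region: let "channel noise"
        g = g + 0.05 * rng.normal(size=d)    # push the iterate off the plateau
    w += 0.01 * g
    w /= np.linalg.norm(w)                   # keep the component on the sphere
```

In the flat regions where the gradient carries little signal, added noise only helps the iterate move on, which is the intuition behind turning channel noise into an accelerator rather than suppressing it.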

* 30 pages, 9 figures