Zhi-Quan Luo

An Efficient Decomposition Algorithm for Large-Scale Network Slicing

Jun 27, 2023
Wei-Kun Chen, Ya-Feng Liu, Rui-Jin Zhang, Yu-Hong Dai, Zhi-Quan Luo

In this paper, we consider the network slicing (NS) problem, which attempts to map multiple customized virtual network requests onto a common shared network infrastructure and allocate network resources to meet diverse service requirements. We propose an efficient decomposition algorithm for solving this NP-hard problem. The proposed algorithm decomposes the large-scale hard NS problem into two relatively easy subproblems, function placement (FP) and traffic routing (TR), and solves them iteratively with information fed back between the two, which makes it particularly suitable for large-scale problems. Specifically, the FP subproblem places service functions onto cloud nodes in the network, and solving it returns a function placement strategy on which the TR subproblem is defined; the TR subproblem finds paths connecting the nodes hosting adjacent functions, and solving it either verifies that the solution of the FP subproblem is optimal for the original problem, or returns a valid inequality to the FP subproblem that cuts off the current infeasible solution. The proposed algorithm is guaranteed to find a globally optimal solution of the NS problem. We demonstrate the effectiveness and efficiency of the proposed algorithm via numerical experiments.
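
The FP-TR feedback loop described above can be sketched on a tiny toy instance. The capacities, demands, and brute-force "solvers" below are hypothetical stand-ins for the paper's subproblem formulations (the real FP is a placement optimization and the real TR a routing feasibility problem):

```python
from itertools import product

# Hypothetical toy data: place 2 service functions on 2 cloud nodes.
NODE_CAPACITY = {"A": 2, "B": 3}          # resource capacity per node
FUNCTION_DEMAND = {"f1": 1, "f2": 2}      # resource demand per function
# A link-budget check plays the role of the TR subproblem: routing is
# feasible only if the node hosting f2 is reachable with enough bandwidth.
LINK_BUDGET = {"A": 10, "B": 3}
REQUIRED_BANDWIDTH = 5

def solve_fp(cuts):
    """Function-placement subproblem: enumerate placements and return the
    first capacity-feasible one not excluded by a feedback cut."""
    for placement in product(NODE_CAPACITY, repeat=2):
        if placement in cuts:
            continue  # cut off by an earlier TR infeasibility
        load = {}
        for func, node in zip(FUNCTION_DEMAND, placement):
            load[node] = load.get(node, 0) + FUNCTION_DEMAND[func]
        if all(load[n] <= NODE_CAPACITY[n] for n in load):
            return placement
    return None

def solve_tr(placement):
    """Traffic-routing subproblem: check the path to the node hosting f2."""
    return LINK_BUDGET[placement[1]] >= REQUIRED_BANDWIDTH

def decompose():
    cuts = set()
    while True:
        placement = solve_fp(cuts)
        if placement is None:
            return None            # whole problem infeasible
        if solve_tr(placement):
            return placement       # TR certifies the FP solution
        cuts.add(placement)        # feed an infeasibility cut back to FP
```

On this instance the first FP solution is cut off by TR, and the second iteration returns a feasible placement, illustrating the feedback loop.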

* 5 pages, 4 figures, accepted for publication in IEEE SPAWC 2023 

Provably Efficient Adversarial Imitation Learning with Unknown Transitions

Jun 11, 2023
Tian Xu, Ziniu Li, Yang Yu, Zhi-Quan Luo

Imitation learning (IL) has proven to be an effective method for learning good policies from expert demonstrations. Adversarial imitation learning (AIL), a subset of IL methods, is particularly promising, but its theoretical foundation in the presence of unknown transitions has yet to be fully developed. This paper explores the theoretical underpinnings of AIL in this context, where the stochastic and uncertain nature of environment transitions presents a challenge. We examine the expert sample complexity and interaction complexity required to recover good policies. To this end, we establish a framework connecting reward-free exploration and AIL, and propose an algorithm, MB-TAIL, that achieves the minimax optimal expert sample complexity of $\widetilde{O} (H^{3/2} |S|/\varepsilon)$ and interaction complexity of $\widetilde{O} (H^{3} |S|^2 |A|/\varepsilon^2)$. Here, $H$ represents the planning horizon, $|S|$ is the state space size, $|A|$ is the action space size, and $\varepsilon$ is the desired imitation gap. MB-TAIL is the first algorithm to achieve this level of expert sample complexity in the unknown transition setting and improves upon the interaction complexity of the best-known algorithm, OAL, by $O(H)$. Additionally, we demonstrate the generalization ability of MB-TAIL by extending it to the function approximation setting and proving that it can achieve expert sample and interaction complexities independent of $|S|$.
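
For reference, the imitation gap $\varepsilon$ in these bounds is conventionally the gap in policy value over the horizon $H$; in standard notation (assumed here, not quoted from the paper):

```latex
V(\pi^{E}) - V(\hat{\pi}) \le \varepsilon,
\qquad
V(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{h=1}^{H} r_h(s_h, a_h)\right]
```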

* arXiv admin note: text overlap with arXiv:2106.10424 

Towards Memory- and Time-Efficient Backpropagation for Training Spiking Neural Networks

Mar 07, 2023
Qingyan Meng, Mingqing Xiao, Shen Yan, Yisen Wang, Zhouchen Lin, Zhi-Quan Luo

Spiking Neural Networks (SNNs) are promising energy-efficient models for neuromorphic computing. For training these non-differentiable SNN models, backpropagation through time (BPTT) with surrogate gradients (SG) has achieved high performance. However, this method incurs considerable memory cost and training time. In this paper, we propose the Spatial Learning Through Time (SLTT) method, which achieves high performance while greatly improving training efficiency compared with BPTT. First, we show that backpropagation through the temporal domain contributes only a little to the final calculated gradients. We therefore propose to ignore these unimportant routes in the computational graph during backpropagation. The proposed method reduces the number of scalar multiplications and achieves a memory occupation that is small and independent of the total number of time steps. Furthermore, we propose a variant of SLTT, called SLTT-K, that performs backpropagation at only K time steps, further reducing the required number of scalar multiplications so that it, too, is independent of the total number of time steps. Experiments on both static and neuromorphic datasets demonstrate the superior training efficiency and performance of SLTT. In particular, our method achieves state-of-the-art accuracy on ImageNet, while reducing memory cost and training time by more than 70% and 50%, respectively, compared with BPTT.
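
The effect of dropping the temporal routes can be illustrated on a toy single-neuron example. All constants, the triangular surrogate, and the "sum of spikes" loss below are illustrative choices, not the paper's exact ones:

```python
# Toy leaky-integrate neuron over 3 time steps.
# Dynamics: u[t] = lam * u[t-1] + w * x[t], spike s[t] = H(u[t] - thresh).
lam, w = 0.5, 1.2
x = [1.0, 0.8, 0.6]

def surrogate_grad(u, thresh=1.0, width=1.0):
    # Triangular surrogate for the non-differentiable Heaviside spike.
    return max(0.0, 1.0 - abs(u - thresh) / width)

# Forward pass: record membrane potentials.
u, u_prev = [], 0.0
for xt in x:
    u_prev = lam * u_prev + w * xt
    u.append(u_prev)

# BPTT with surrogate gradients: du[t]/dw accumulates through the
# temporal recurrence lam * u[t-1] as well as the spatial input w * x[t].
grad_bptt, du_dw = 0.0, 0.0
for t, xt in enumerate(x):
    du_dw = lam * du_dw + xt              # temporal route kept
    grad_bptt += surrogate_grad(u[t]) * du_dw

# SLTT-style gradient: drop the temporal route entirely, so du[t]/dw
# equals x[t] and each step no longer depends on earlier time steps.
grad_sltt = sum(surrogate_grad(u[t]) * xt for t, xt in enumerate(x))
```

Here `grad_sltt` approximates `grad_bptt` while needing no stored temporal dependencies, which is the source of the memory and time savings.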

A Physics-based and Data-driven Approach for Localized Statistical Channel Modeling

Mar 04, 2023
Shutao Zhang, Xinzhi Ning, Xi Zheng, Qingjiang Shi, Tsung-Hui Chang, Zhi-Quan Luo

Localized channel modeling is crucial for offline performance optimization of 5G cellular networks, but existing channel models are designed for general scenarios and do not capture local geographical structures. In this paper, we propose a novel physics-based and data-driven localized statistical channel modeling (LSCM) approach, which is capable of sensing the physical geographical structure of the targeted cellular environment. The proposed channel modeling relies solely on the reference signal received power (RSRP) of the user equipment, unlike traditional methods that use full channel impulse response matrices. The key is to build the relationship between the RSRP and the channel's angular power spectrum. Building on this relationship, we formulate the channel modeling task as a sparse recovery problem in which the non-zero entries of the sparse vector indicate the powers and angles of departure of the channel paths. A computationally efficient weighted non-negative orthogonal matching pursuit (WNOMP) algorithm is devised to solve the formulated problem. Finally, experiments on synthetic and real RSRP measurements examine the performance of the proposed method.
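
A weighted non-negative greedy pursuit of the kind named above can be sketched as follows. This is a generic hypothetical variant, not necessarily the paper's exact WNOMP; it uses SciPy's non-negative least squares for the refit step:

```python
import numpy as np
from scipy.optimize import nnls

def wnomp(A, y, weights, k, tol=1e-10):
    """Greedy sparse recovery: pick atoms by weighted correlation with the
    residual, then refit the active set with non-negative least squares,
    so recovered coefficients (path powers) stay non-negative."""
    n = A.shape[1]
    support = []
    x = np.zeros(n)
    residual = y.astype(float).copy()
    for _ in range(k):
        corr = weights * (A.T @ residual)     # weighted atom selection
        corr[support] = -np.inf               # never re-pick an atom
        j = int(np.argmax(corr))
        if corr[j] <= tol:
            break                             # nothing left to explain
        support.append(j)
        coef, _ = nnls(A[:, support], y)      # non-negative refit
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x
```

With an orthonormal dictionary and uniform weights this reduces to picking the largest residual entries, which makes the sketch easy to sanity-check.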

* the 34th International Teletraffic Congress (ITC), Shenzhen, China, 2022 

Invariant Layers for Graphs with Nodes of Different Types

Feb 27, 2023
Dmitry Rybin, Ruoyu Sun, Zhi-Quan Luo

Neural networks that satisfy invariance with respect to input permutations have been widely studied in the machine learning literature. However, in many applications only a subset of all input permutations is of interest. For heterogeneous graph data, one can focus on permutations that preserve node types. We fully characterize linear layers invariant to such permutations. We verify experimentally that implementing these layers in graph neural network architectures allows learning important node interactions more effectively than existing techniques. We show that the dimension of the space of these layers is given by a generalization of Bell numbers, extending the work of Maron et al. (2019). We further narrow the invariant network design space by addressing a question about the sizes of tensor layers necessary for function approximation on graph data. Our findings suggest that function approximation on a graph with $n$ nodes can be done with tensors of sizes $\leq n$, which is tighter than the best-known bound $\leq n(n-1)/2$. For $d \times d$ image data with translation symmetry, our methods give a tight upper bound of $2d - 1$ (instead of $d^{4}$) on the sizes of invariant tensor generators via a surprising connection to Davenport constants.
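
For orientation, in the single-node-type case of Maron et al. (2019) the dimensions of permutation-invariant and -equivariant tensor layers are given by ordinary Bell numbers, which the paper's count generalizes. The Bell numbers themselves are easy to compute via the Bell triangle; a minimal sketch:

```python
def bell_numbers(n):
    """Return the first n Bell numbers [B(0), ..., B(n-1)] via the Bell
    triangle: each row starts with the last entry of the previous row,
    and each subsequent entry adds the entry above-left."""
    row = [1]
    bells = [1]                    # B(0) = 1
    for _ in range(n - 1):
        nxt = [row[-1]]            # new row starts with previous row's end
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
        bells.append(row[0])       # first entry of each row is a Bell number
    return bells
```

For example, `bell_numbers(6)` yields the familiar sequence 1, 1, 2, 5, 15, 52.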

Coordinating Multiple Intelligent Reflecting Surfaces without Channel Information

Feb 20, 2023
Fan Xu, Jiawei Yao, Wenhai Lai, Kaiming Shen, Xin Li, Xin Chen, Zhi-Quan Luo

Conventional beamforming methods for intelligent reflecting surfaces (IRSs), also known as reconfigurable intelligent surfaces (RISs), typically require full channel state information (CSI). However, the computational cost of channel acquisition soars exponentially with the number of IRSs. To bypass this difficulty, we propose a novel strategy called blind beamforming that coordinates multiple IRSs by means of statistics, without knowing the CSI. Blind beamforming only requires measuring the received signal power at the user terminal for a sequence of randomly generated phase shifts across all IRSs. The main idea is to extract the key statistical quantity for beamforming by exploring only a small portion of the whole solution space of phase shifts. We show that blind beamforming guarantees a signal-to-noise ratio (SNR) boost of $\Theta(N^{2L})$ under certain conditions, where $L$ is the number of IRSs and $N$ is the number of reflecting elements per IRS. This result significantly improves upon the state of the art in the area of multi-IRS assisted communication. Moreover, blind beamforming is validated via field tests and simulations.
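
The power-measurement-only idea can be simulated for a single IRS using a conditional-sample-mean statistic over random probes. The channel model, binary phase alphabet, and statistic below are illustrative assumptions, not the paper's exact multi-IRS scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 16, 3000                      # reflecting elements, random probes
PHASES = np.array([0.0, np.pi])      # binary phase-shift alphabet

# Hypothetical channels: a direct path plus one cascaded
# (BS-IRS-user) coefficient per reflecting element.
h_d = rng.standard_normal() + 1j * rng.standard_normal()
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(N)

def received_power(theta):
    return abs(h_d + np.sum(np.exp(1j * theta) * h)) ** 2

# Probe with uniformly random phase shifts, recording only received
# powers -- no channel state information is ever estimated.
thetas = rng.choice(PHASES, size=(T, N))
powers = np.array([received_power(t) for t in thetas])

# Conditional sample mean: for each element, keep the phase value whose
# conditioned average received power is largest.
best = np.empty(N)
for n in range(N):
    means = [powers[thetas[:, n] == p].mean() for p in PHASES]
    best[n] = PHASES[int(np.argmax(means))]
```

The configuration `best` is selected purely from the probe statistics, and its received power comfortably exceeds the average random-probe power in this toy setting.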

* 12 pages, 12 figures 

Theoretical Analysis of Offline Imitation With Supplementary Dataset

Jan 27, 2023
Ziniu Li, Tian Xu, Yang Yu, Zhi-Quan Luo

Behavioral cloning (BC) can recover a good policy from abundant expert data but may fail when expert data is insufficient. This paper considers a situation where, besides a small amount of expert data, a supplementary dataset is available that can be collected cheaply from sub-optimal policies. Imitation learning with a supplementary dataset is an emerging practical framework, but its theoretical foundation remains under-developed. To advance understanding, we first investigate a direct extension of BC, called NBCU, that learns from the union of all available data. Our analysis shows that, although NBCU suffers an imitation gap larger than that of BC in the worst case, there exist special cases where NBCU performs better than, or as well as, BC. This discovery implies that noisy data can be helpful if utilized carefully. We therefore introduce a discriminator-based importance sampling technique to re-weight the supplementary data, yielding the WBCU method. With our newly developed landscape-based analysis, we prove that WBCU can outperform BC under mild conditions. Empirical studies show that WBCU simultaneously achieves the best performance on two challenging tasks where prior state-of-the-art methods fail.
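
A discriminator-based re-weighting of the general kind described above can be sketched in a few lines. The 1-D features, the logistic-regression discriminator, and the density-ratio weights are illustrative assumptions, not the paper's exact WBCU estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D features: expert data clusters near +1, while the
# cheap supplementary data mixes expert-like samples with off-policy ones.
expert = rng.normal(1.0, 0.3, size=(200, 1))
supp = np.vstack([rng.normal(1.0, 0.3, size=(100, 1)),    # expert-like
                  rng.normal(-1.0, 0.3, size=(100, 1))])  # sub-optimal

# Logistic-regression discriminator D(x) ~ P(sample is expert),
# trained by plain gradient ascent on the log-likelihood.
X = np.vstack([expert, supp])
y = np.concatenate([np.ones(len(expert)), np.zeros(len(supp))])
Xb = np.hstack([X, np.ones((len(X), 1))])    # append a bias feature
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w += 0.1 * Xb.T @ (y - p) / len(y)

# Density-ratio trick: weight each supplementary sample by D/(1-D),
# which upweights expert-like data and suppresses the rest.
Sb = np.hstack([supp, np.ones((len(supp), 1))])
d = 1.0 / (1.0 + np.exp(-Sb @ w))
weights = d / (1.0 - d)
```

The expert-like half of the supplementary data ends up with much larger weights than the sub-optimal half, which is the intended effect of the re-weighting.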

A Data Quality Assessment Framework for AI-enabled Wireless Communication

Dec 13, 2022
Hanning Tang, Liusha Yang, Rui Zhou, Jing Liang, Hong Wei, Xuan Wang, Qingjiang Shi, Zhi-Quan Luo

Using artificial intelligence (AI) to re-design and enhance the current wireless communication system is a promising pathway toward the future sixth-generation (6G) wireless network. The performance of AI-enabled wireless communication depends heavily on the quality of wireless air-interface data. Although there are various approaches to data quality assessment (DQA) for different applications, none has been designed for wireless air-interface data. In this paper, we propose a DQA framework that measures the quality of wireless air-interface data from three aspects: similarity, diversity, and completeness. Similarity measures how close the considered datasets are in terms of their statistical distributions; diversity measures how well-rounded a dataset is; and completeness measures to what degree the considered dataset satisfies the required performance metrics in an application scenario. The proposed framework can be applied to various types of wireless air-interface data, such as channel state information (CSI), signal-to-interference-plus-noise ratio (SINR), and reference signal received power (RSRP). For simplicity, the validity of the proposed DQA framework is corroborated by applying it to CSI data, using the similarity and diversity metrics to improve CSI compression and recovery in massive MIMO systems.
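
The three aspects can be given simple quantitative stand-ins; the functions below are illustrative proxies under assumed definitions, not the paper's exact metrics:

```python
import numpy as np

def similarity(a, b):
    """Closeness of first- and second-order statistics of two datasets,
    mapped to (0, 1]; 1 means identical mean and spread per feature."""
    gap = (np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
           + np.linalg.norm(a.std(axis=0) - b.std(axis=0)))
    return 1.0 / (1.0 + gap)

def diversity(a):
    """Average pairwise distance between samples: a well-rounded dataset
    spreads its samples out, a redundant one does not."""
    diffs = a[:, None, :] - a[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).mean())

def completeness(achieved, required):
    """Degree to which a downstream performance metric meets its target."""
    return min(1.0, achieved / required)
```

Such proxies already behave as the framework intends: a dataset compared with itself scores similarity 1, a collapsed dataset scores diversity 0, and completeness caps at 1 once the target is met.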

Adversarial Rademacher Complexity of Deep Neural Networks

Nov 27, 2022
Jiancong Xiao, Yanbo Fan, Ruoyu Sun, Zhi-Quan Luo

Deep neural networks are vulnerable to adversarial attacks. Ideally, a robust model should perform well on both the perturbed training data and the unseen perturbed test data. It has been found empirically that fitting perturbed training data is not hard, but generalizing to perturbed test data is quite difficult. To better understand adversarial generalization, it is of great interest to study the adversarial Rademacher complexity (ARC) of deep neural networks. However, how to bound ARC in the multi-layer case has been largely unclear, owing to the difficulty of analyzing the adversarial loss in the definition of ARC. There have been two lines of attempts to bound ARC. One provides upper bounds on ARC in the linear and one-hidden-layer cases; however, these approaches seem hard to extend to multi-layer cases. The other modifies the adversarial loss and provides upper bounds on the Rademacher complexity of the surrogate loss in multi-layer cases; however, such variants of Rademacher complexity are not guaranteed to bound meaningful robust generalization gaps (RGG). In this paper, we provide a solution to this unsolved problem: the first bound on the adversarial Rademacher complexity of deep neural networks. Our approach is based on covering numbers. We provide a method for handling the robustified function classes of DNNs so that their covering numbers can be calculated. Finally, we present experiments that study the empirical implications of our bounds and provide an analysis of poor adversarial generalization.
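
For orientation, the quantity being bounded can be written out. This is the standard form of the empirical adversarial Rademacher complexity of a class $\mathcal{F}$ with loss $\ell$ and perturbation radius $\epsilon$ (notation assumed here, not quoted from the paper):

```latex
\mathcal{R}_S\bigl(\tilde{\ell}_{\mathcal{F}}\bigr)
  = \mathbb{E}_{\boldsymbol{\sigma}}\left[\,\sup_{f \in \mathcal{F}}
    \frac{1}{n}\sum_{i=1}^{n} \sigma_i
    \max_{\|x_i' - x_i\| \le \epsilon} \ell\bigl(f(x_i'), y_i\bigr)\right]
```

The inner maximization over perturbations $x_i'$ is precisely what makes the covering-number analysis of the multi-layer case difficult.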
