Extended reality (XR) is one of the most important applications of beyond-5G and 6G networks. Real-time XR video transmission poses stringent data-rate and delay requirements; in particular, the frame-by-frame transmission mode of XR video makes it highly sensitive to dynamic network conditions. To improve users' quality of experience (QoE), we design a cross-layer transmission framework for real-time XR video. The proposed framework enables lightweight information exchange between the base station (BS) and the XR server, which assists both adaptive bitrate selection and wireless resource scheduling. Using this cross-layer information, we formulate the problem of maximizing user QoE as finding the optimal scheduling and bitrate-adjustment strategies. To address the mismatched time scales of the two strategies, we decouple the original problem into two subproblems and solve them individually with a multi-agent approach. Specifically, we propose the multi-step Deep Q-network (MS-DQN) algorithm to obtain a frame-priority-based wireless resource scheduling strategy, and the Transformer-based Proximal Policy Optimization (TPPO) algorithm for video bitrate adaptation. Experimental results show that the proposed TPPO+MS-DQN algorithm improves QoE by 3.6% to 37.8%; in particular, the proposed MS-DQN algorithm improves transmission quality by 49.9%-80.2%.
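The TPPO agent updates its bitrate policy with PPO's clipped surrogate objective. As a minimal sketch of that clipping step only (the Transformer policy network itself is omitted; the function name and the default `eps` are illustrative assumptions, not values from the paper):

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate loss for a single (state, action) sample.

    ratio: pi_new(a|s) / pi_old(a|s); advantage: estimated A(s, a).
    eps is the standard PPO clip range (illustrative default).
    """
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    # PPO maximizes min(unclipped, clipped); negate for a gradient-descent minimizer.
    return -min(unclipped, clipped)
```

The clipping keeps a single policy update from moving the bitrate-selection probabilities too far from the behavior policy that collected the data.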
Extended reality (XR) is one of the most important applications of 5G. Real-time XR video transmission in 5G networks requires both low latency and a high data rate. In this paper, we propose a resource allocation scheme based on frame-priority scheduling to meet these requirements. The optimization problem is modelled as a frame-priority-based radio resource scheduling problem aimed at improving transmission quality. We propose a scheduling framework based on the multi-step Deep Q-network (MS-DQN) and design a neural network model based on a convolutional neural network (CNN). Simulation results show that the frame-priority-based MS-DQN scheduling framework improves transmission quality by 49.9%-80.2%.
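The "multi-step" element of MS-DQN refers to bootstrapping the Q-learning target over n consecutive transitions instead of one. A minimal sketch of that n-step target, assuming standard multi-step Q-learning (the function name and defaults are illustrative, not taken from the paper):

```python
def multi_step_target(rewards, next_q_values, gamma=0.99, terminal=False):
    """n-step bootstrapped target used in multi-step DQN.

    rewards: the n rewards r_t ... r_{t+n-1} collected along the trajectory.
    next_q_values: Q(s_{t+n}, a) for every action a (ignored if terminal).
    """
    # Discounted sum of the n observed rewards.
    target = sum((gamma ** k) * r for k, r in enumerate(rewards))
    # Bootstrap from the greedy Q-value n steps ahead.
    if not terminal:
        target += (gamma ** len(rewards)) * max(next_q_values)
    return target
```

Propagating rewards over several steps speeds up credit assignment, which matters when the scheduling reward (a frame delivered or lost) arrives several slots after the resource-block decisions that caused it.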
With the rapid development of indoor location-based services (LBSs), the demand for accurate localization keeps growing. To meet this demand, we propose an indoor localization algorithm based on a graph convolutional network (GCN). We first model the access points (APs) and the relationships between them as a graph, and use received signal strength indication (RSSI) measurements to construct fingerprints. The graph and fingerprints are then fed into the GCN for feature extraction, and classification is performed by a multilayer perceptron (MLP). Finally, experiments are conducted in a 2D scenario and in a 3D scenario with floor prediction. In the 2D scenario, the mean distance error of the GCN-based method is 11 m, an improvement of 7 m and 13 m over the DNN-based and CNN-based schemes, respectively. In the 3D scenario, the accuracies of predicting buildings and floors reach 99.73% and 93.43%, respectively. Moreover, when floors and buildings are predicted correctly, the mean distance error is 13 m, outperforming the DNN-based and CNN-based schemes, whose mean distance errors are 34 m and 26 m, respectively.
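Each GCN layer propagates the RSSI fingerprint features over the AP graph through a normalized adjacency matrix. A minimal NumPy sketch of one such layer, assuming the standard symmetric-normalization propagation rule (function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops to the AP graph
    deg = a_hat.sum(axis=1)                      # node degrees of A + I
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))     # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU activation
```

Stacking such layers mixes each AP's fingerprint features with those of its graph neighbors before the MLP performs the final location classification.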
Transmission control protocol (TCP) congestion control is one of the key techniques for improving network performance. TCP congestion control algorithm identification (TCP identification) can be used to significantly improve network efficiency. Existing TCP identification methods apply only to a limited number of TCP congestion control algorithms and focus on wired networks. In this paper, we propose a machine-learning-based passive TCP identification method for wired and wireless networks. After comparing three typical machine learning models, we conclude that a 4-layer Long Short-Term Memory (LSTM) model achieves the best identification accuracy. Our approach achieves better than 98% accuracy in both wired and wireless networks and generalizes to newly proposed TCP congestion control algorithms.
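An LSTM classifier of this kind consumes per-flow time series (e.g. sequences of throughput or congestion-window samples). As a reference for the recurrence each layer applies, here is a single LSTM cell step in NumPy (gate ordering, shapes, and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h, c, W, U, b):
    """One LSTM time step; gate weights stacked row-wise as [input, forget, cell, output].

    x: input vector; h, c: previous hidden and cell states (size n);
    W: (4n, x_dim), U: (4n, n), b: (4n,).
    """
    n = h.size
    z = W @ x + U @ h + b        # all four gate pre-activations at once
    i = sigmoid(z[:n])           # input gate
    f = sigmoid(z[n:2 * n])      # forget gate
    g = np.tanh(z[2 * n:3 * n])  # candidate cell state
    o = sigmoid(z[3 * n:])       # output gate
    c_new = f * c + i * g        # update cell state
    h_new = o * np.tanh(c_new)   # new hidden state
    return h_new, c_new
```

The cell state lets the model retain evidence across long sample sequences, which is what makes it suitable for distinguishing the slow-timescale dynamics of different congestion control algorithms.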