Weimin Wu

A manifold learning-based CSI feedback framework for FDD massive MIMO

Apr 28, 2023
Yandi Cao, Haifan Yin, Ziao Qin, Weidong Li, Weimin Wu, Merouane Debbah

Massive multiple-input multiple-output (MIMO) in Frequency Division Duplex (FDD) mode suffers from heavy feedback overhead for Channel State Information (CSI). In this paper, a novel manifold learning-based CSI feedback framework (MLCF) is proposed to reduce the feedback overhead and improve the spectral efficiency of FDD massive MIMO. Manifold learning (ML) is an effective method for dimensionality reduction. However, most ML algorithms focus only on data compression and lack corresponding recovery methods. Moreover, their computational complexity is high when dealing with incremental data. To solve these problems, we propose a landmark selection algorithm to characterize the topological skeleton of the manifold on which the CSI samples reside. Based on the learned skeleton, the local patch of an incremental CSI sample on the manifold can be easily determined by its nearest landmarks. This motivates us to propose a low-complexity compression and reconstruction scheme that keeps the local geometric relationships with the landmarks unchanged. We theoretically prove the convergence of the proposed algorithm and derive an upper bound on the error of approximating the CSI samples with landmarks. Simulation results under an industrial 3GPP channel model demonstrate that the proposed MLCF method outperforms existing algorithms based on compressed sensing and deep learning.
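As a rough illustration of the landmark-based compression idea described above, the sketch below expresses a new CSI vector as a local, geometry-preserving combination of its k nearest landmarks (LLE-style reconstruction weights) and rebuilds it from the fed-back indices and weights. The function names, the regularization, and all dimensions are illustrative assumptions rather than the exact MLCF algorithm.

```python
import numpy as np

def compress_csi(h, landmarks, k=8, reg=1e-3):
    """Express the CSI vector h as a weighted combination of its k nearest
    landmarks, keeping the local geometric relationship fixed (LLE-style).
    h:         (N,) complex CSI vector at the user side
    landmarks: (M, N) complex landmark CSI samples shared with the base station
    Returns (idx, w): the k landmark indices and their combination weights,
    which together form the low-dimensional feedback payload."""
    # 1. Locate the local patch: the k nearest landmarks of h.
    dist = np.linalg.norm(landmarks - h, axis=1)
    idx = np.argsort(dist)[:k]

    # 2. Solve for weights minimizing ||h - sum_j w_j * landmark_j||^2
    #    subject to sum_j w_j = 1 (weights may be complex for complex CSI).
    Z = landmarks[idx] - h                        # (k, N) local differences
    G = Z @ Z.conj().T                            # (k, k) local Gram matrix
    G += reg * np.trace(G).real / k * np.eye(k)   # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                                  # enforce the sum-to-one constraint
    return idx, w

def reconstruct_csi(idx, w, landmarks):
    """Recover the CSI at the base station from the fed-back indices and weights."""
    return w @ landmarks[idx]

# Toy usage with random data standing in for CSI samples.
rng = np.random.default_rng(0)
landmarks = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
h = rng.standard_normal(32) + 1j * rng.standard_normal(32)
idx, w = compress_csi(h, landmarks)
h_hat = reconstruct_csi(idx, w, landmarks)
print(np.linalg.norm(h - h_hat) / np.linalg.norm(h))  # normalized reconstruction error
```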

* 12 pages, 5 figures 

Instance-aware Model Ensemble With Distillation For Unsupervised Domain Adaptation

Nov 15, 2022
Weimin Wu, Jiayuan Fan, Tao Chen, Hancheng Ye, Bo Zhang, Baopu Li

The linear ensemble-based strategy, i.e., the averaging ensemble, has been proposed to improve performance in unsupervised domain adaptation (UDA) tasks. However, a typical UDA task is usually challenged by dynamically changing factors, such as variable weather, views, and backgrounds in the unlabeled target domain. Most previous ensemble strategies ignore this dynamic and uncontrollable aspect of UDA and thus suffer from limited feature representations and performance bottlenecks. To enhance model adaptability between domains and reduce the computational cost of deploying the ensemble model, we propose a novel framework, namely Instance-aware Model Ensemble With Distillation (IMED), which fuses multiple UDA component models adaptively according to different instances and distills these components into a small model. The core idea of IMED is a dynamic, instance-aware ensemble strategy: for each instance, a nonlinear fusion subnetwork is learned that fuses the extracted features and predicted labels of the multiple component models. This nonlinear fusion helps the ensemble model handle dynamically changing factors. After learning a large-capacity ensemble model with good adaptability to different changing factors, we leverage it as a teacher to guide the learning of a compact student model through knowledge distillation. Furthermore, we provide a theoretical analysis of the validity of IMED for UDA. Extensive experiments conducted on various UDA benchmark datasets, e.g., Office-31, Office-Home, and VisDA-2017, show the superiority of the IMED-based model over state-of-the-art methods at comparable computational cost.
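To make the instance-aware fusion and distillation steps concrete, the sketch below shows one plausible form of such an ensemble head in PyTorch: a small gating network predicts per-instance weights from the concatenated component features, the component models' features and predicted labels are fused nonlinearly with those weights, and a standard knowledge-distillation loss transfers the ensemble teacher's softened outputs to a compact student. Layer sizes, module names, and the loss form are assumptions, not the exact IMED architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAwareFusion(nn.Module):
    """Instance-aware nonlinear fusion head (illustrative, not the exact IMED design).
    For each input instance, it predicts per-component weights and fuses the
    component models' features and logits accordingly."""
    def __init__(self, n_components, feat_dim, n_classes, hidden=256):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_components * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_components),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, feats, logits):
        # feats:  (B, n_components, feat_dim) features from the component models
        # logits: (B, n_components, n_classes) predictions from the component models
        w = torch.softmax(self.gate(feats.flatten(1)), dim=-1)  # (B, n_components) per-instance weights
        fused_feat = (w.unsqueeze(-1) * feats).sum(dim=1)       # instance-aware feature fusion
        fused_logit = (w.unsqueeze(-1) * logits).sum(dim=1)     # instance-aware label fusion
        return self.classifier(fused_feat) + fused_logit        # ensemble (teacher) prediction

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Standard KD loss: the student matches the teacher's temperature-softened outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

# Toy usage: fuse three component models for a 31-class task (e.g., Office-31-sized).
fusion = InstanceAwareFusion(n_components=3, feat_dim=512, n_classes=31)
feats = torch.randn(4, 3, 512)
logits = torch.randn(4, 3, 31)
teacher_out = fusion(feats, logits)
student_out = torch.randn(4, 31, requires_grad=True)
loss = distillation_loss(student_out, teacher_out.detach())
loss.backward()
```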

* 12 pages 