Abstract: Channel knowledge maps (CKMs) provide a site-specific, location-indexed knowledge base that supports environment-aware communications and sensing in 6G networks. In practical deployments, CKM observations are often noisy and irregular due to coverage-induced sparsity and hardware-induced linear and nonlinear degradations. Conventional end-to-end algorithms couple the CKM prior with task- and device-specific observations and require labeled data and separate training for each construction configuration, which is expensive and therefore incompatible with scalable edge deployment. Motivated by the trends toward cloud-edge collaboration and the Artificial Intelligence Radio Access Network (AI-RAN) paradigm, we develop a cloud-edge collaborative framework for scalable CKM construction that enables knowledge sharing across tasks, devices, and regions by explicitly decoupling a generalizable CKM prior from the information provided by local observations. A foundation model is trained once in the cloud on unlabeled data to learn the generalizable CKM prior; at inference, edge nodes combine the shared prior with their local observations. Experiments on the CKMImageNet dataset show that the proposed method achieves competitive construction accuracy while substantially reducing training cost and data requirements, mitigating negative transfer, and offering clear advantages in generalization and deployment scalability.
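To make the prior/observation decoupling concrete, the following minimal sketch (an illustration under stated assumptions, not the paper's actual algorithm) treats the cloud-trained foundation model as a denoising prior and has an edge node fuse it with sparse local observations in plug-and-play fashion. All names here (`edge_reconstruct`, `smooth_prior`, and the Gaussian stand-in for the learned prior) are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_reconstruct(y, mask, prior_denoise, n_iters=30, step=0.5):
    """Hypothetical edge-side CKM construction: alternate a data-consistency
    step on local observations with a step toward the shared, cloud-trained
    prior (plug-and-play style).

    y             : 2-D map of observed channel gains (zeros off-support)
    mask          : boolean 2-D array, True where y was actually measured
    prior_denoise : callable standing in for the shared foundation-model prior
    """
    x = np.where(mask, y, 0.0)            # initialize from the observations
    for _ in range(n_iters):
        x = x - step * (mask * (x - y))   # enforce agreement with local data
        x = prior_denoise(x)              # project toward the shared prior
    return x

# Toy stand-in for the cloud-trained prior: a Gaussian-smoothing denoiser.
smooth_prior = lambda x: gaussian_filter(x, sigma=1.0)

rng = np.random.default_rng(0)
truth = gaussian_filter(rng.normal(size=(64, 64)), sigma=3.0)  # synthetic CKM
mask = rng.random(truth.shape) < 0.2                           # 20% coverage
y = np.where(mask, truth + 0.05 * rng.normal(size=truth.shape), 0.0)
estimate = edge_reconstruct(y, mask, smooth_prior)
```

The design point is the decoupling itself: `prior_denoise` is trained once in the cloud and shared across edge nodes, while the data-consistency step is the only part that touches a node's task- and device-specific measurements.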
Abstract: Composed Image Retrieval (CIR) is a cross-modal task that aims to retrieve target images from large-scale databases using a reference image and a modification text. Most existing methods rely on a single model to perform feature fusion and similarity matching. However, this paradigm faces two major challenges. First, a single model cannot capture global semantics and fine-grained details simultaneously; handling heterogeneous sub-tasks with one set of weights, it often misses subtle but decisive image-text correspondences. Second, the absence of dynamic weight allocation prevents adaptive exploitation of complementary model strengths, so the resulting embedding drifts away from the target and misleads the nearest-neighbor search in CIR. To address these limitations, we propose Dynamic Adaptive Fusion (DAFM) for multi-model collaboration in CIR. Rather than optimizing a single method in isolation, DAFM exploits the complementary strengths of heterogeneous models and adaptively rebalances their contributions. This not only maximizes retrieval accuracy but also makes the performance gains independent of the fusion order, highlighting the robustness of the approach. Experiments on the CIRR and FashionIQ benchmarks demonstrate consistent improvements: our method achieves a Recall@10 of 93.21 and an Rmean of 84.43 on CIRR, and an average Rmean of 67.48 on FashionIQ, surpassing recent strong baselines by up to 4.5%. These results confirm that dynamic multi-model collaboration provides an effective and general solution for CIR.
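Since the abstract does not spell out DAFM's weighting rule, the sketch below illustrates the general idea under an assumed confidence heuristic: each heterogeneous model scores the gallery, per-query weights are derived from each model's top-1/top-2 score margin, and the fused score drives nearest-neighbor retrieval. The function name `dynamic_fusion_retrieve` and the margin heuristic are hypothetical, not the paper's method.

```python
import numpy as np

def dynamic_fusion_retrieve(query_sims, temperature=1.0, top_k=10):
    """Hypothetical sketch of dynamic multi-model score fusion for CIR.

    query_sims : list of 1-D arrays, one per model; each holds that model's
                 similarities between the composed query and every gallery
                 image (assumed precomputed on normalized embeddings).
    Returns the indices of the top-k gallery images under the fused score.
    """
    # Confidence heuristic (assumption): a model whose best score clearly
    # separates from its runner-up gets a larger weight for this query.
    margins = np.array([np.sort(s)[-1] - np.sort(s)[-2] for s in query_sims])
    weights = np.exp(margins / temperature)
    weights /= weights.sum()

    # Weighted-sum fusion of the per-model similarity scores.
    fused = np.zeros_like(query_sims[0])
    for w, s in zip(weights, query_sims):
        fused += w * s
    return np.argsort(fused)[::-1][:top_k]

# Example: three heterogeneous models scoring a 1000-image gallery.
rng = np.random.default_rng(1)
sims = [rng.random(1000) for _ in range(3)]
top = dynamic_fusion_retrieve(sims, top_k=10)
```

Because the fusion is a weighted sum, commutativity of addition makes the fused ranking independent of the order in which models are combined, consistent with the order-independence claim above.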