Abstract: Low-altitude communication networks (LACNs) serve as the critical infrastructure of the emerging low-altitude economy (LAE), supporting services such as drone delivery and infrastructure inspection. However, LACNs operate in highly dynamic three-dimensional (3D) environments characterized by high mobility and predominantly line-of-sight (LoS) propagation, which create strong coupling among key performance objectives, including coverage, interference mitigation, handover management, and sensing capability. Tuning individual objectives in isolation cannot capture these cross-objective interactions, rendering conventional approaches based on experience-driven tuning and repeated field trials inefficient and costly. To address these challenges, we propose DT-MOO, a Digital Twin-based Multi-Objective Optimization framework for LACNs. By constructing a high-fidelity virtual replica that integrates realistic environmental models, electromagnetic (EM) propagation, and traffic dynamics within a unified environment, DT-MOO enables joint evaluation and systematic optimization of interdependent network parameters, scoring candidate configurations by their combined effect on multiple objectives. As a foundational validation of the framework, we report real-world experiments in a 5G-enabled LACN focusing on coverage-interference co-optimization. Compared with an operator-provisioned, experience-based baseline, DT-MOO increases the high-quality coverage rate from 14.0% to 52.9% across all evaluated altitudes and achieves a net SINR gain under stringent criteria despite local spatial trade-offs, confirming its ability to handle coupled objectives in practical LACN deployments.
Abstract: Existing LLM-based Kubernetes diagnostic systems cannot learn from operational experience: they operate on static knowledge bases and do not improve from past resolutions. We present MetaKube, an experience-aware LLM framework built on three synergistic innovations: (1) an Episodic Pattern Memory Network (EPMN) that abstracts diagnostic patterns from historical resolutions and provides confidence-calibrated retrieval for both rapid pattern matching and guided causal exploration; (2) a meta-cognitive controller that dynamically routes between intuitive and analytical pathways based on problem familiarity, optimizing the trade-off between speed and depth; and (3) KubeLLM, a locally deployable 8B model enhanced through domain-specific post-training on our 7,000-sample Kubernetes Fault Resolution Dataset. Evaluation on 1,873 real-world scenarios shows that MetaKube raises Qwen3-8B from 50.9 to 90.5 points, approaching GPT-4.1 performance while ensuring complete data privacy. EPMN contributes a 15.3% improvement through experiential learning, and continuous-learning experiments show progressive gains as the system accumulates operational knowledge. The source code and related resources are available at https://github.com/MetaKube-LLM-for-Kubernetes-Diagnosis/MetaKube.