Abstract: Agentic artificial intelligence (AI) is emerging as a key enabler for autonomous radio access networks (RANs), where multiple large language model (LLM)-based agents reason and collaborate to achieve operator-defined intents. The open RAN (O-RAN) architecture enables the deployment and coordination of such agents. However, most existing works consider simple intents handled by independent agents, while complex intents that require coordination among agents remain unexplored. In this paper, we propose an agentic AI framework for intent translation and optimization in cell-free O-RAN. A supervisor agent translates the operator intents into an optimization objective and minimum rate requirements. Based on this information, a user weighting agent retrieves relevant prior experience from a memory module to determine the user priority weights for precoding. If the intent includes an energy-saving objective, an open radio unit (O-RU) management agent is also activated to determine the set of active O-RUs using a deep reinforcement learning (DRL) algorithm. A monitoring agent measures the user data rates and coordinates with the other agents to guarantee that the minimum rate requirements are satisfied. To enhance scalability, we adopt a parameter-efficient fine-tuning (PEFT) method that enables the same underlying LLM to be shared across the different agents. Simulation results show that, in energy-saving mode, the proposed agentic AI framework reduces the number of active O-RUs by 41.93% when compared with three baseline schemes. With the PEFT method, the proposed framework reduces memory usage by 92% when compared with deploying separate LLM agents.
Abstract: Cell-free massive multiple-input multiple-output (MIMO) is a key technology for next-generation wireless systems. The integration of cell-free massive MIMO within the open radio access network (O-RAN) architecture addresses the growing need for decentralized, scalable, and high-capacity networks that can support diverse use cases. Precoding is a crucial step in the operation of cell-free massive MIMO, where open radio units (O-RUs) steer their beams towards the intended users while mitigating interference to other users. Current precoding schemes for cell-free massive MIMO are either fully centralized or fully distributed. Centralized schemes are not scalable, whereas distributed schemes may lead to high inter-O-RU interference. In this paper, we propose a distributed and scalable precoding framework for cell-free massive MIMO that uses limited information exchange among precoding agents to mitigate interference. We formulate an optimization problem for precoding that maximizes the aggregate throughput while guaranteeing the minimum data rate requirements of the users. The formulated problem is nonconvex. We propose a multi-timescale framework that combines multi-agent deep reinforcement learning (DRL) with expert insights from an iterative algorithm to determine the precoding matrices efficiently. We conduct simulations and compare the proposed framework with centralized and distributed precoding methods for different numbers of O-RUs, users, and transmit antennas. The results show that the proposed framework achieves a higher aggregate throughput than the distributed regularized zero-forcing (D-RZF) scheme and the weighted minimum mean square error (WMMSE) algorithm. When compared with the centralized regularized zero-forcing (C-RZF) scheme, the proposed framework achieves similar aggregate throughput but with lower signaling overhead.