Abstract:Post-training model quantization is a widely adopted technique for reducing the memory and computational costs of large language models (LLMs). However, most existing methods rely on uniform or heuristic bitwidth assignments, failing to account for the nonuniform sensitivity of weights to quantization noise. In this paper, we propose a novel framework for allocating quantization bitwidths based on sensitivity metrics derived from a Hessian proxy. We make key assumptions that allow the layer/component-wise loss function to be expressed as an explicit function of the bitwidths. This enables a clean formulation of the bit allocation problem as a convex optimization task, whose closed-form solution adapts precision across weights to minimize the layer-wise quantization loss. Inspecting the solution yields several insights (such as the equal-loss structure), which we exploit to design the proposed BAQ (Bit Allocation Quantization) algorithm. BAQ strikes a favorable trade-off between loss minimization and complexity, and it integrates into standard quantization pipelines with minimal overhead. Experimental results show that BAQ consistently outperforms GPTQ, achieving up to 56× lower perplexity at the same bitwidth on large language models ranging from 125M to 30B parameters. Leveraging the analytical results derived from the optimal bit allocation problem, we also provide a theoretical explanation for the observed gains. All code for this paper is available at https://github.com/CSU-ModelCompression/BAQ.
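The convex bit allocation idea above can be illustrated with a minimal numpy sketch. This is not the paper's code: the per-component loss model s_i · 4^(−b_i) (sensitivity times squared quantization step), the continuous unclipped bitwidths, and the function name `allocate_bits` are all illustrative assumptions. Under that model, Lagrangian stationarity forces every component's loss to be equal at the optimum, which is the "equal-loss structure" the abstract mentions.

```python
import numpy as np

def allocate_bits(sensitivities, avg_bits):
    """Closed-form continuous bit allocation (illustrative sketch).

    Minimizes sum_i s_i * 4**(-b_i) subject to mean(b_i) == avg_bits.
    Stationarity gives equal per-component losses, yielding
    b_i = avg_bits + log4(s_i) - mean(log4(s)).
    """
    s = np.asarray(sensitivities, dtype=float)
    log4_s = np.log(s) / np.log(4.0)
    return avg_bits + log4_s - np.mean(log4_s)

# Toy example: three components with increasingly sensitive weights.
s = np.array([1.0, 4.0, 16.0])
b = allocate_bits(s, avg_bits=3.0)      # more sensitive -> more bits
losses = s * 4.0 ** (-b)                # equal across components
```

Here the more sensitive components receive extra bits, the less sensitive ones fewer, while the average bitwidth budget is met exactly.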
Abstract:With the advent of 6G communications, intelligent communication systems face multiple challenges, including constrained perception and response capabilities, limited scalability, and low adaptability in dynamic environments. This tutorial provides a systematic introduction to the principles, design, and applications of Large Artificial Intelligence Models (LAMs) and Agentic AI technologies in intelligent communication systems, aiming to offer researchers a comprehensive overview of cutting-edge technologies and practical guidance. First, we outline the background of 6G communications, review the technological evolution from LAMs to Agentic AI, and clarify the tutorial's motivation and main contributions. Subsequently, we present a comprehensive review of the key components required for constructing LAMs. We further categorize LAMs and analyze their applicability, covering Large Language Models (LLMs), Large Vision Models (LVMs), Large Multimodal Models (LMMs), Large Reasoning Models (LRMs), and lightweight LAMs. Next, we propose a LAM-centric design paradigm tailored for communications, encompassing dataset construction and both internal and external learning approaches. Building upon this, we develop an LAM-based Agentic AI system for intelligent communications, clarifying its core components such as planners, knowledge bases, tools, and memory modules, as well as its interaction mechanisms. We also introduce a multi-agent framework with data retrieval, collaborative planning, and reflective evaluation for 6G. Subsequently, we provide a detailed overview of the applications of LAMs and Agentic AI in communication scenarios. Finally, we summarize the research challenges and future directions in current studies, aiming to support the development of efficient, secure, and sustainable next-generation intelligent communication systems.
Abstract:Wireless Technology Recognition (WTR) is essential in modern communication systems, enabling efficient spectrum management and the seamless coexistence of diverse technologies. In real-world conditions, WTR solutions should be able to handle signals from various sources with different sampling rates, capture devices, and frequency bands. However, traditional WTR methods, which rely on energy detection, Convolutional Neural Network (CNN) models, or Deep Learning (DL), lack the robustness and adaptability required to generalize across unseen environments, different sampling devices, and previously unencountered signal classes. In this work, we introduce a Transformer-based foundation model for WTR, trained in an unsupervised manner on large-scale, unlabeled wireless signal datasets. Foundation models are designed to learn general-purpose representations that transfer effectively across tasks and domains, allowing generalization toward new technologies and sampling devices. Our approach leverages input patching for computational efficiency and incorporates a two-stage training pipeline: unsupervised pre-training followed by lightweight fine-tuning. This enables the model to generalize to new wireless technologies and environments using only a small number of labeled samples. Experimental results demonstrate that our model achieves superior accuracy across varying sampling rates and frequency bands while maintaining low computational complexity, supporting the vision of a reusable wireless foundation model adaptable to new technologies with minimal retraining.
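The input patching step mentioned above can be sketched as follows. This is a generic illustration, not the paper's implementation: the patch length, the non-overlapping split, and the choice to stack real and imaginary parts per patch are all assumptions about how an IQ stream might be tokenized for a Transformer.

```python
import numpy as np

def patch_iq(iq, patch_len):
    """Split a complex IQ stream into non-overlapping patches (tokens).

    Each token concatenates the real and imaginary parts of one patch,
    so a length-N stream becomes roughly N/patch_len tokens of width
    2*patch_len, shrinking the Transformer's sequence length.
    """
    n = (len(iq) // patch_len) * patch_len   # drop the ragged tail
    patches = iq[:n].reshape(-1, patch_len)
    return np.concatenate([patches.real, patches.imag], axis=1)

# Toy IQ stream of 10 complex samples -> 2 tokens of width 8.
tokens = patch_iq(np.arange(10) + 1j * np.arange(10), patch_len=4)
```

Patching trades fine time resolution for a much shorter token sequence, which is where the computational savings come from.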
Abstract:Large language models and autonomous AI agents have evolved rapidly, resulting in a diverse array of evaluation benchmarks, frameworks, and collaboration protocols. However, the landscape remains fragmented and lacks a unified taxonomy or comprehensive survey. Therefore, we present a side-by-side comparison of benchmarks developed between 2019 and 2025 that evaluate these models and agents across multiple domains. In addition, we propose a taxonomy of approximately 60 benchmarks that cover general and academic knowledge reasoning, mathematical problem-solving, code generation and software engineering, factual grounding and retrieval, domain-specific evaluations, multimodal and embodied tasks, task orchestration, and interactive assessments. Furthermore, we review AI-agent frameworks introduced between 2023 and 2025 that integrate large language models with modular toolkits to enable autonomous decision-making and multi-step reasoning. Moreover, we present real-world applications of autonomous AI agents in materials science, biomedical research, academic ideation, software engineering, synthetic data generation, chemical reasoning, mathematical problem-solving, geographic information systems, multimedia, healthcare, and finance. We then survey key agent-to-agent collaboration protocols, namely the Agent Communication Protocol (ACP), the Model Context Protocol (MCP), and the Agent-to-Agent Protocol (A2A). Finally, we discuss recommendations for future research, focusing on advanced reasoning strategies, failure modes in multi-agent LLM systems, automated scientific discovery, dynamic tool integration via reinforcement learning, integrated search capabilities, and security vulnerabilities in agent protocols.
Abstract:Reconfigurable intelligent surface (RIS) is emerging as a promising technology for next-generation wireless communication networks, offering a variety of merits such as the ability to tailor the communication environment. Moreover, deploying multiple RISs helps mitigate severe signal blocking between the base station (BS) and users, providing a practical and efficient solution to enhance the service coverage. However, fully reaping the potential of a multi-RIS aided communication system requires solving a non-convex optimization problem. This challenge motivates the adoption of learning-based methods for determining the optimal policy. In this paper, we introduce a novel heterogeneous graph neural network (GNN) to effectively leverage the graph topology of a wireless communication environment. Specifically, we design an association scheme that selects a suitable RIS for each user. Then, we maximize the weighted sum rate (WSR) of all the users by iteratively optimizing the RIS association scheme and the beamforming designs until the considered heterogeneous GNN converges. Based on the proposed approach, each user is associated with the best RIS, which is shown to significantly improve the system capacity in multi-RIS multi-user millimeter wave (mmWave) communications. Specifically, simulation results demonstrate that the proposed heterogeneous GNN closely approaches the performance of the high-complexity alternating optimization (AO) algorithm in the considered multi-RIS aided communication system, and it outperforms other benchmark schemes. Moreover, the performance improvement achieved through the RIS association scheme is shown to be on the order of 30%.
Abstract:Holographic multiple-input multiple-output (MIMO) is envisioned as one of the most promising technology enablers for future sixth-generation (6G) networks. The use of electrically large holographic surface (HoloS) antennas has the potential to significantly boost the spatial multiplexing gain by increasing the number of degrees of freedom (DoF), even in line-of-sight (LoS) channels. In this context, the research community has shown a growing interest in characterizing the fundamental limits of this technology. In this paper, we compare the two analytical methods commonly utilized in the literature for this purpose: the cut-set integral and the self-adjoint operator. We provide a detailed description of both methods and discuss their advantages and limitations.
Abstract:This paper introduces a novel framework for designing efficient neural network architectures specifically tailored to tiny machine learning (TinyML) platforms. By leveraging large language models (LLMs) for neural architecture search (NAS), a vision transformer (ViT)-based knowledge distillation (KD) strategy, and an explainability module, the approach strikes an optimal balance between accuracy, computational efficiency, and memory usage. The LLM-guided search explores a hierarchical search space, refining candidate architectures through Pareto optimization based on accuracy, multiply-accumulate operations (MACs), and memory metrics. The best-performing architectures are further fine-tuned using logits-based KD with a pre-trained ViT-B/16 model, which enhances generalization without increasing model size. Evaluated on the CIFAR-100 dataset and deployed on an STM32H7 microcontroller (MCU), the three proposed models, LMaNet-Elite, LMaNet-Core, and QwNet-Core, achieve accuracy scores of 74.50%, 74.20% and 73.00%, respectively. All three models surpass current state-of-the-art (SOTA) models, such as MCUNet-in3/in4 (69.62% / 72.86%) and XiNet (72.27%), while maintaining a low computational cost of less than 100 million MACs and adhering to the stringent 320 KB static random-access memory (SRAM) constraint. These results demonstrate the efficiency and performance of the proposed framework for TinyML platforms, underscoring the potential of combining LLM-driven search, Pareto optimization, KD, and explainability to develop accurate, efficient, and interpretable models. This approach opens new possibilities in NAS, enabling the design of efficient architectures specifically suited for TinyML.
Abstract:Integrated sensing and communication (ISAC) in the terahertz (THz) band enables obstacle detection, which in turn facilitates efficient beam management to mitigate THz signal blockage. Simultaneously, a THz radio map, which captures signal propagation characteristics through the distribution of received signal strength (RSS), is well-suited for sensing, as it inherently contains obstacle-related information and reflects the unique properties of the THz channel. This means that communication-assisted sensing in ISAC can be effectively achieved using a THz radio map. However, constructing a radio map presents significant challenges due to the sparse deployment of THz sensors and their limited ability to accurately measure the RSS distribution, which directly affects obstacle sensing. In this paper, we formulate an integrated problem for the first time, leveraging the mutual enhancement between sensed obstacles and the constructed THz radio maps. To address this challenge while improving generalization, we propose an integration framework based on a conditional generative adversarial network (CGAN), which uncovers the manifold structure of THz radio maps embedded with obstacle information. Furthermore, recognizing the shared environmental semantics across THz radio maps from different beam directions, we introduce a novel voting-based sensing scheme, where obstacles are detected by aggregating votes from THz radio maps generated by the CGAN. Simulation results demonstrate that the proposed framework outperforms non-integrated baselines in both radio map construction and obstacle sensing, achieving up to 44.3% and 90.6% reductions in mean squared error (MSE), respectively, in a real-world scenario. These results validate the effectiveness of the proposed voting-based scheme.
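The voting-based sensing scheme described above can be sketched with a minimal majority-vote fusion. This is an illustrative assumption about the aggregation step, not the paper's code: per-beam radio maps are assumed to have already been thresholded into binary obstacle maps, and `vote_obstacles` with its 0.5 vote threshold is a hypothetical helper.

```python
import numpy as np

def vote_obstacles(obstacle_maps, threshold=0.5):
    """Majority-vote fusion of per-beam binary obstacle maps.

    obstacle_maps: array of shape (n_beams, H, W) with 0/1 detections,
    one map per beam direction. A cell is declared an obstacle when at
    least `threshold` of the beams vote for it.
    """
    maps = np.asarray(obstacle_maps, dtype=float)
    vote_fraction = maps.mean(axis=0)     # fraction of beams voting "obstacle"
    return vote_fraction >= threshold     # fused binary obstacle map

# Three beam directions voting over a 2x2 grid.
maps = np.array([
    [[1, 0], [1, 1]],
    [[1, 0], [0, 1]],
    [[0, 0], [1, 1]],
])
fused = vote_obstacles(maps)
```

Aggregating across beams suppresses spurious detections that appear in only one radio map, which is the intuition behind the voting scheme.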
Abstract:This work develops a physically consistent model for stacked intelligent metasurfaces (SIM) using multiport network theory and transfer scattering parameters (T-parameters). Unlike the scattering parameters (S-parameters) model, which is highly complex and non-tractable due to its nested nature and excessive number of matrix inversions, the developed T-parameters model is less complex and more tractable due to its explicit and compact nature. This work further derives the constraints of T-parameters for lossless reciprocal reconfigurable intelligent surfaces (RISs). A gradient descent algorithm (GDA) is proposed to maximize the sum rate in SIM-aided multiuser scenarios, and the results show that accounting for mutual coupling and feedback between consecutive layers can improve the sum rate. In addition, with a fixed total number of elements, increasing the number of SIM layers degrades the sum rate under our exact and simplified channel models, whereas it improves the sum rate under the simplified channel model with the Rayleigh-Sommerfeld diffraction coefficients.
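The tractability argument above comes from a standard property of T-parameters: cascaded stages compose by a plain matrix product, whereas combining S-parameter blocks requires nested expressions with matrix inversions. A generic sketch of the cascade (toy 2x2 blocks, not the paper's SIM model) is:

```python
import numpy as np

def cascade_t(t_layers):
    """Cascade the T-parameter blocks of consecutive layers.

    Unlike nested S-parameter combination, the end-to-end transfer
    matrix of a stack is just the left-to-right matrix product of the
    per-layer T matrices - explicit and inversion-free.
    """
    total = np.eye(t_layers[0].shape[0], dtype=complex)
    for t in t_layers:
        total = total @ t
    return total

# Two toy layers: a diagonal block followed by an upper-triangular one.
t1 = np.array([[2.0, 0.0], [0.0, 0.5]], dtype=complex)
t2 = np.array([[1.0, 1.0], [0.0, 1.0]], dtype=complex)
t_total = cascade_t([t1, t2])
```

This product structure is what makes the multi-layer SIM model compact enough for gradient-based optimization.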
Abstract:Despite the widespread adoption of vision sensors in edge applications, such as surveillance, the transmission of video data consumes substantial spectrum resources. Semantic communication (SC) offers a solution by extracting and compressing information at the semantic level, preserving the accuracy and relevance of transmitted data while significantly reducing the volume of transmitted information. However, traditional SC methods face inefficiencies due to the repeated transmission of static frames in edge videos, exacerbated by the absence of sensing capabilities, which results in spectrum inefficiency. To address this challenge, we propose an SC with computer vision sensing (SCCVS) framework for edge video transmission. The framework first introduces a compression ratio (CR) adaptive SC (CRSC) model, capable of adjusting the CR based on whether the frames are static or dynamic, effectively conserving spectrum resources. Additionally, we implement an object detection and semantic segmentation model-enabled sensing (OSMS) scheme, which intelligently senses changes in the scene and assesses the significance of each frame through in-context analysis. Hence, the OSMS scheme provides CR prompts to the CRSC model based on real-time sensing results. Moreover, both CRSC and OSMS are designed as lightweight models, ensuring compatibility with the resource-constrained sensors commonly used in practical edge applications. Experimental simulations validate the effectiveness of the proposed SCCVS framework, demonstrating its ability to enhance transmission efficiency without sacrificing critical semantic information.