The Information Bottleneck (IB) method is an information-theoretic framework for designing a parsimonious and tunable feature-extraction mechanism, such that the extracted features are maximally relevant to a specific learning or inference task. Despite its theoretical value, the IB is based on a functional optimization problem that admits a closed-form solution only in specific cases (e.g., Gaussian distributions), making it difficult to apply in most applications, where one must resort to complex, approximate variational implementations. To overcome this limitation, we propose an approach that adapts the closed-form solution of the Gaussian IB to a general task. Whatever the inference task to be performed by a (possibly deep) neural network, the key idea is to opportunistically design a regression sub-task, embedded in the original problem, for which we can safely assume (joint) multivariate normality between the sub-task's inputs and outputs. In this way, we can exploit a fixed, pre-trained neural network to process the input data using a tunable number of features, trading data size and complexity for accuracy. This approach is particularly useful whenever a device needs to transmit data (or features) to a server that has to fulfill an inference task, as it provides a principled way to extract the most relevant features for the task to be executed, while seeking the best trade-off between the size of the feature vector to be transmitted, inference accuracy, and complexity. Extensive simulation results testify to the effectiveness of the proposed method and encourage further investigation of this research line.
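To make the Gaussian IB step concrete, below is a minimal sketch of its closed-form solution, following the well-known eigenvalue characterization of Chechik et al. (2005), assuming the regression sub-task's input X and target Y are jointly Gaussian with known (or estimated) covariances; the function name is ours, and the optimal per-row scaling and additive noise of the full solution are omitted.

```python
import numpy as np

def gaussian_ib_projection(Sx, Sxy, Sy, beta):
    """Closed-form Gaussian IB projection (after Chechik et al., 2005).

    Minimal sketch: returns the feature directions active at
    trade-off parameter beta; the optimal per-row scaling and the
    additive noise of the full solution are omitted here.
    """
    # Conditional covariance of X given Y under joint Gaussianity.
    Sx_given_y = Sx - Sxy @ np.linalg.solve(Sy, Sxy.T)
    # Left eigenvectors of Sx_given_y @ inv(Sx) are the right
    # eigenvectors of its transpose.
    M = Sx_given_y @ np.linalg.inv(Sx)
    eigvals, eigvecs = np.linalg.eig(M.T)
    eigvals, eigvecs = eigvals.real, eigvecs.real
    order = np.argsort(eigvals)  # most informative directions first
    # Direction i becomes active once beta exceeds 1 / (1 - lambda_i).
    active = [i for i in order if eigvals[i] < 1.0 - 1.0 / beta]
    return eigvecs[:, active].T  # one extracted feature per row
```

Increasing beta activates more eigen-directions, which is precisely the tunable knob between feature-vector size and inference accuracy that the abstract describes.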
In future 6G wireless networks, semantic and effectiveness aspects of communications will play a fundamental role, incorporating meaning and relevance into transmissions. However, obstacles arise when devices employ diverse languages, logic, or internal representations, leading to semantic mismatches that might jeopardize understanding. In latent space communication, this challenge manifests as misalignment within the high-dimensional representations where deep neural networks encode data. This paper presents a novel framework for goal-oriented semantic communication, leveraging relative representations to mitigate semantic mismatches via latent space alignment. We propose a dynamic optimization strategy that adapts relative representations, communication parameters, and computation resources for energy-efficient, low-latency, goal-oriented semantic communications. Numerical results demonstrate that our methodology mitigates mismatches among devices while jointly optimizing energy consumption, delay, and goal effectiveness.
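As a concrete illustration of the relative-representation idea used for latent space alignment (after the formulation of Moschella et al.), here is a minimal sketch: each latent vector is re-expressed through its cosine similarities to a shared set of anchor latents, which makes representations from independently trained encoders comparable. The function name and shapes are illustrative; the paper's dynamic optimization of anchors and resources is not reproduced.

```python
import numpy as np

def relative_representation(z, anchors):
    """Re-encode latents z by cosine similarity to shared anchors.

    z:       (num_samples, dim) absolute latents from one encoder
    anchors: (num_anchors, dim) latents of the shared anchor set
    returns: (num_samples, num_anchors) relative representation
    """
    z_n = z / np.linalg.norm(z, axis=-1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=-1, keepdims=True)
    return z_n @ a_n.T
```

Two encoders whose latent spaces differ by an angle-preserving transformation produce (approximately) the same relative representation, which is what allows a receiver-side decoder to be reused across mismatched devices.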
Topological deep learning (TDL) is a rapidly evolving field that uses topological features to understand and design deep learning models. This paper posits that TDL may complement graph representation learning and geometric deep learning by incorporating topological concepts, making it a natural choice in various machine learning settings. To this end, this paper discusses open problems in TDL, ranging from practical benefits to theoretical foundations. For each problem, it outlines potential solutions and future research opportunities. At the same time, this paper serves as an invitation to the scientific community to actively participate in TDL research to unlock the potential of this emerging field.
Recent advances in AI technologies have notably expanded device intelligence, fostering federation and cooperation among distributed AI agents. These advancements impose new requirements on future 6G mobile network architectures. To meet these demands, it is essential to transcend classical boundaries and integrate communication, computation, control, and intelligence. This paper presents the 6G-GOALS approach to goal-oriented and semantic communications for AI-Native 6G Networks. The proposed approach incorporates semantic, pragmatic, and goal-oriented communication into AI-native technologies, aiming to facilitate information exchange between intelligent agents in a more relevant, effective, and timely manner, thereby optimizing bandwidth, latency, energy, and electromagnetic field (EMF) radiation. The focus is on distilling data to its most relevant and terse representation, aligning with the source's intent or the destination's objectives and context, or serving a specific goal. 6G-GOALS builds on three fundamental pillars: i) AI-enhanced semantic data representation, sensing, compression, and communication; ii) foundational AI reasoning and causal semantic data representation, contextual relevance, and value for goal-oriented effectiveness; and iii) sustainability enabled by more efficient wireless services. Finally, we illustrate two proofs of concept implementing semantic, goal-oriented, and pragmatic communication principles in near-future use cases. Our study covers the project's vision, methodologies, and potential impact.
This paper investigates the role and impact of control operations in dynamic mobile edge computing (MEC) empowered by Reconfigurable Intelligent Surfaces (RISs), in which multiple devices offload their computation tasks to an access point (AP) equipped with an edge server (ES), with the help of the RIS. Although usually ignored, the control aspects related to channel estimation (CE), resource allocation (RA), and control signaling play a fundamental role in the user-perceived delay and energy consumption. In general, the more resources devoted to control operations, the higher their reliability; however, this introduces an overhead that reduces the resources available for computation offloading, possibly increasing the overall latency experienced. Conversely, a lower control overhead leaves more resources for computation offloading but degrades CE accuracy and RA flexibility. This paper establishes a basic framework for integrating the impact of control operations into the performance evaluation of the RIS-aided MEC paradigm, clarifying their trade-offs through theoretical analysis and numerical simulations.
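The core trade-off can be illustrated with a toy numerical model (entirely our own assumption, not the paper's framework): a fixed frame is split between control/pilot symbols and offloading payload, so a larger control fraction improves the channel estimate (and hence the effective SNR) but shrinks the payload, and the latency-minimizing control fraction lies strictly in between. All constants and the exponential error-decay model below are illustrative.

```python
import numpy as np

# Toy model of the control-vs-offloading trade-off (values are ours).
B, snr, bits = 1e6, 10.0, 1e5      # bandwidth [Hz], linear SNR, task size [bits]

def offload_latency(rho):
    ce_err = np.exp(-5.0 * rho)    # toy CE error decay with pilot fraction rho
    snr_eff = snr * (1.0 - ce_err) # estimation error eats into effective SNR
    rate = (1.0 - rho) * B * np.log2(1.0 + snr_eff)
    return bits / rate

rhos = np.linspace(0.02, 0.9, 89)
best = rhos[np.argmin([offload_latency(r) for r in rhos])]
print(f"latency-minimizing control fraction ~ {best:.2f}")
```

Under these toy constants the optimum falls at an interior control fraction (around 0.2), reproducing qualitatively the tension between control reliability and offloading resources that the paper formalizes.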
This work lies at the intersection of two cutting-edge technologies envisioned to proliferate in future 6G wireless systems: Multi-access Edge Computing (MEC) and Reconfigurable Intelligent Surfaces (RISs). While the former will bring a powerful information technology environment to the wireless edge, the latter will enhance communication performance, thanks to the possibility of adapting wireless propagation to end users' needs, according to specific service requirements. We propose a joint optimization of radio, computing, and wireless environment reconfiguration through an RIS, with the goal of enabling low-power computation offloading services with reliability guarantees. Going beyond previous works on this topic, multi-carrier, frequency-selective RIS element responses and wireless channels are considered. This opens new challenges in RIS optimization, accounting for frequency-dependent RIS response profiles, which strongly affect RIS-aided wireless links and, as a consequence, MEC service performance. We formulate an optimization problem accounting for short- and long-term constraints involving device transmit power allocation across multiple subcarriers and local computing resources, as well as RIS reconfiguration parameters according to a recently developed Lorentzian model. Besides a theoretical optimization framework, numerical results show the effectiveness of the proposed method in enabling low-power, reliable computation offloading over RIS-aided frequency-selective channels.
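For intuition, below is a sketch of the kind of Lorentzian resonator model commonly used for a single RIS element's frequency response, in which magnitude and phase are intrinsically coupled across subcarriers; the exact parameterization and constraints adopted in the paper may differ, and all parameter values here are illustrative only.

```python
import numpy as np

def lorentzian_response(f, f_res, kappa, F):
    """Frequency response of one RIS element under a Lorentzian
    resonator model (sketch): the tunable resonance f_res and
    damping kappa jointly shape magnitude and phase over frequency."""
    return F * f**2 / (f_res**2 - f**2 + 1j * kappa * f)

# Illustrative band around 3.5 GHz; F scaled so the peak magnitude is ~1.
f = np.linspace(3.4e9, 3.6e9, 64)
r = lorentzian_response(f, f_res=3.5e9, kappa=1e8, F=1e8 / 3.5e9)
print(np.abs(r).max(), np.degrees(np.angle(r)).round(1)[:4])
```

Because the response varies sharply with frequency near resonance, a single RIS configuration cannot be optimized per subcarrier independently, which is exactly the frequency-selective coupling that complicates the optimization described above.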
Deep Neural Network (DNN) splitting is one of the key enablers of edge Artificial Intelligence (AI), as it allows end users to pre-process data and offload part of the computational burden to nearby Edge Cloud Servers (ECSs). This opens new opportunities and degrees of freedom in balancing energy consumption, delay, accuracy, privacy, and other trustworthiness metrics. In this work, we explore the opportunity of DNN splitting at the edge of 6G wireless networks to enable low-energy cooperative inference with target delay and accuracy, from a goal-oriented perspective. Going beyond the current literature, we explore new trade-offs that take into account the accuracy degradation as a function of the Splitting Point (SP) selection and wireless channel conditions. Then, we propose an algorithm that dynamically controls SP selection, local computing resources, uplink transmit power, and bandwidth allocation, in a goal-oriented fashion, to meet a target goal-effectiveness. To the best of our knowledge, this is the first work proposing adaptive SP selection on the basis of all learning performance metrics (i.e., energy, delay, accuracy), with the aim of guaranteeing the accomplishment of a goal (e.g., minimizing energy consumption under latency and accuracy constraints). Numerical results show the advantages of the proposed SP selection and resource allocation in enabling energy-frugal and effective edge AI.
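A minimal sketch of the goal-oriented selection logic follows: given per-SP statistics (e.g., profiled offline or estimated online for the current channel), choose the splitting point that minimizes total energy subject to delay and accuracy targets. This exhaustive search stands in for the paper's dynamic algorithm, which additionally adapts transmit power and bandwidth; all names and inputs are illustrative.

```python
import numpy as np

def select_splitting_point(energy_local, energy_tx, delay, accuracy,
                           d_max, a_min):
    """Pick the DNN splitting point minimizing total energy under
    delay and accuracy constraints (sketch by exhaustive search).

    All inputs are per-candidate-SP arrays of equal length.
    """
    total_energy = np.asarray(energy_local) + np.asarray(energy_tx)
    feasible = (np.asarray(delay) <= d_max) & (np.asarray(accuracy) >= a_min)
    if not feasible.any():
        return None                      # no SP meets the goal
    candidates = np.where(feasible)[0]
    return candidates[np.argmin(total_energy[candidates])]
```

Note that splitting earlier typically shifts energy from local computing to transmission (more intermediate data to send), while the achievable accuracy itself depends on the SP and channel conditions, which is why all three metrics must enter the feasibility check jointly.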
Despite the large research effort devoted to learning dependencies between time series, the state of the art still faces a major limitation: existing methods learn partial correlations but fail to discriminate across distinct frequency bands. Motivated by the many applications in which this differentiation is pivotal, we overcome this limitation by learning a block-sparse, frequency-dependent, partial correlation graph, in which layers correspond to different frequency bands and partial correlations can occur over just a few layers. To this end, we formulate and solve two nonconvex learning problems: the first has a closed-form solution and is suitable when there is prior knowledge about the number of partial correlations; the second hinges on an iterative solution based on successive convex approximation and is effective in the general case where no prior knowledge is available. Numerical results on synthetic data show that the proposed methods outperform the current state of the art. Finally, the analysis of financial time series confirms that partial correlations exist only within a few frequency bands, underscoring how our methods yield valuable insights that would remain undetected without discriminating along the frequency domain.
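To ground the notion of a frequency-dependent partial correlation graph, here is a sketch of the basic estimation step behind it: band-averaged cross-spectral density matrices are inverted, and the normalized magnitude of the inverse encodes partial coherence between each pair of series within each band. The block-sparse closed-form and SCA-based learning procedures from the paper are not reproduced; the function name and defaults are ours.

```python
import numpy as np
from scipy.signal import csd

def band_partial_coherence(X, bands, fs=1.0, nperseg=256):
    """Per-band partial coherence from inverse spectral matrices
    (estimation sketch). X: (num_series, num_samples) array;
    bands: list of (f_low, f_high) pairs in the same units as fs.
    """
    n = X.shape[0]
    out = []
    for f_lo, f_hi in bands:
        S = np.zeros((n, n), dtype=complex)
        for i in range(n):
            for j in range(n):
                f, Sij = csd(X[i], X[j], fs=fs, nperseg=nperseg)
                mask = (f >= f_lo) & (f < f_hi)
                S[i, j] = Sij[mask].mean()      # band-averaged CSD
        K = np.linalg.inv(S)                    # inverse spectral matrix
        d = np.sqrt(np.abs(np.diag(K)))
        out.append(np.abs(K) / np.outer(d, d))  # partial coherence
    return out
```

Each element of the returned list is one "layer" of the graph; a block-sparse learner, as in the paper, would then force most pairs to be zero across all layers while allowing the surviving partial correlations to concentrate in just a few bands.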
Internet of Things (IoT) applications combine sensing, wireless communication, intelligence, and actuation, enabling interaction among heterogeneous devices that collect and process considerable amounts of data. However, the effectiveness of IoT applications is constrained by the limited available resources, including spectrum, energy, computing, and learning and inference capabilities. This paper challenges the prevailing approach to IoT communication, which prioritizes the usage of resources to guarantee perfect recovery, at the bit level, of the data transmitted by the sensors to the central unit. We propose a novel approach, called goal-oriented (GO) IoT system design, that transcends traditional bit-related metrics and focuses directly on the fulfillment of the goal motivating the exchange of data. This improvement is achieved through a comprehensive system optimization, integrating sensing, communication, computation, learning, and control. We provide numerical results demonstrating the practical applications of our methodology in compelling use cases such as edge inference, cooperative sensing, and federated learning. These examples highlight the effectiveness and real-world implications of the proposed approach, which has the potential to revolutionize IoT systems.