Abstract:Point-cloud semantic segmentation underpins a wide range of critical applications. Although recent deep architectures and large-scale datasets have driven impressive closed-set performance, these models struggle to recognize or properly segment objects outside their training classes. This gap has sparked interest in Open-Set Semantic Segmentation (O3S), where models must both correctly label known categories and detect novel, unseen classes. In this paper, we propose a plug-and-play framework for O3S. By modeling the segmentation pipeline as a conditional Markov chain, we derive a novel regularization term, dubbed Conditional Channel Capacity Maximization (3CM), which maximizes the mutual information between features and predictions conditioned on each class. When incorporated into standard loss functions, 3CM encourages the encoder to retain richer, label-dependent features, thereby enhancing the network's ability to distinguish and segment previously unseen categories. Experimental results demonstrate the effectiveness of the proposed method in detecting unseen objects. We further outline future directions for dynamic open-world adaptation and efficient information-theoretic estimation.
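As an illustrative sketch of the kind of objective described above (one plausible instantiation, not necessarily the exact formulation in the paper; the symbols Z for encoder features, \hat{Y} for predictions, Y for ground-truth labels, and \lambda for the regularization weight are assumptions introduced here), the combined training objective could take the form

    \mathcal{L} = \mathcal{L}_{\mathrm{CE}}(\hat{Y}, Y) - \lambda \, I(Z; \hat{Y} \mid Y),
    \qquad
    I(Z; \hat{Y} \mid Y) = \sum_{c} p(Y = c) \, I(Z; \hat{Y} \mid Y = c),

so that minimizing \mathcal{L} maximizes, for each class c, the mutual information between features and predictions conditioned on that class, on top of the standard cross-entropy term.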
Abstract:Federated learning in satellites offers several advantages. Firstly, it ensures data privacy and security, as sensitive data remains on the satellites and is not transmitted to a central location. This is particularly important when dealing with sensitive or classified information. Secondly, federated learning allows satellites to collectively learn from a diverse set of data sources, benefiting from the distributed knowledge across the satellite network. Lastly, the use of federated learning reduces the communication bandwidth requirements between satellites and the central server, as only model updates are exchanged instead of raw data. By leveraging federated learning, satellites can collaborate and continuously improve their machine learning models while preserving data privacy and minimizing communication overhead. This enables the development of more intelligent and efficient satellite systems for various applications, such as Earth observation, weather forecasting, and space exploration.
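A minimal sketch of the update exchange alluded to above, using federated averaging; the function and variable names are illustrative assumptions, not part of any specific satellite system:

    import numpy as np

    def local_update(global_weights, local_data, lr=0.01):
        # Each satellite refines the global model on its own data; only the
        # resulting weights, never the raw observations, leave the satellite.
        weights = global_weights.copy()
        for x, y in local_data:
            grad = (weights @ x - y) * x  # gradient of a squared-error loss
            weights -= lr * grad
        return weights

    def federated_round(global_weights, satellite_datasets):
        # The central server aggregates the received model updates by simple
        # averaging (FedAvg), so communication is limited to model parameters.
        updates = [local_update(global_weights, d) for d in satellite_datasets]
        return np.mean(updates, axis=0)

In each round, only the weight vectors returned by local_update cross the satellite-to-ground link, which is what keeps the raw data on board and the communication bandwidth low.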
Abstract:Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that operates at the wireless edge. It enables clients to collaborate on model training while keeping their data private from adversaries and the central server. However, current FL approaches have limitations. Some rely on secure multiparty computation, which can be vulnerable to inference attacks. Others employ differential privacy, but this may lead to decreased test accuracy when dealing with a large number of parties contributing small amounts of data. To address these issues, this paper proposes a novel approach that integrates federated learning seamlessly into the inner workings of MIMO (Multiple-Input Multiple-Output) systems.
Abstract:Federated learning (FL) is a type of distributed machine learning at the wireless edge that preserves the privacy of clients' data from adversaries and even the central server. Existing federated learning approaches either use (i) secure multiparty computation (SMC), which is vulnerable to inference attacks, or (ii) differential privacy, which may decrease the test accuracy given a large number of parties, each contributing a relatively small amount of data. To address the limitations of these existing methods, in this paper we incorporate federated learning into the inner workings of MIMO (multiple-input multiple-output) systems.
Abstract:In a multiple-input multiple-output (MIMO) setup in which one side of the link comprises a linear antenna array, data can be transmitted over the directions of the incident rays. This paper studies the channel capacity of this setup. We consider two configurations: one in which the energy is constant and equal across all rays, and one in which the available energy is evenly distributed over the rays. For the latter, we show that the channel capacity is upper-bounded regardless of the number of rays and antennas. We also compare this setup with the legacy single-input single-output (SISO) AWGN channel.
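As an illustrative sketch of why such a bound can arise (under the simplifying assumptions of N parallel AWGN sub-channels, one per ray, total power P split evenly among them, and noise power N_0 per ray; these symbols are introduced here and are not taken from the paper):

    C(N) = N \log_2\!\left(1 + \frac{P}{N N_0}\right)
    \;\xrightarrow{\,N \to \infty\,}\; \frac{P}{N_0} \log_2 e,

so the capacity grows with the number of rays but remains bounded by (P/N_0)\log_2 e, in line with the upper bound described above.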
Abstract:Federated learning (FL) is a type of distributed machine learning at the wireless edge that preserves the privacy of clients' data from adversaries and even the central server. Existing federated learning approaches either use (i) secure multiparty computation (SMC), which is vulnerable to inference attacks, or (ii) differential privacy, which may decrease the test accuracy given a large number of parties, each contributing a relatively small amount of data. To address the limitations of these existing methods, in this paper we introduce PHY-Fed, a new framework that secures federated algorithms from an information-theoretic point of view.
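One common way to realize federated aggregation at the physical layer is over-the-air computation, in which simultaneously transmitted client updates superpose in the wireless channel so that the server observes only their noisy sum, never an individual update. The toy sketch below illustrates that general idea; it is not a description of how PHY-Fed itself works, and all names in it are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    def over_the_air_round(client_updates, noise_std=0.01):
        # Clients transmit their (power-aligned) updates simultaneously; the
        # channel adds the signals, so the server receives only the noisy sum
        # and cannot read off any single client's contribution.
        superposed = np.sum(client_updates, axis=0)
        received = superposed + rng.normal(0.0, noise_std, size=superposed.shape)
        return received / len(client_updates)  # estimate of the average update

    # Toy usage: 5 clients, each holding a 10-dimensional model update.
    updates = [rng.normal(size=10) for _ in range(5)]
    average_update = over_the_air_round(updates)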