Abstract:Collaborative training methods like Federated Learning (FL) and Split Learning (SL) enable distributed machine learning without sharing raw data. However, FL assumes that clients can train entire models, which is infeasible for large-scale models. In contrast, while SL alleviates the client-side memory constraint of FL by offloading most of the training to the server, it increases network latency due to its sequential nature. Other methods address this conundrum by using local loss functions for parallel client-side training to improve efficiency, but they lack server feedback and may suffer from poor accuracy. We propose FSL-SAGE (Federated Split Learning via Smashed Activation Gradient Estimation), a new federated split learning algorithm that estimates server-side gradient feedback via auxiliary models. These auxiliary models periodically adapt to emulate server behavior on local datasets. We show that FSL-SAGE achieves a convergence rate of $\mathcal{O}(1/\sqrt{T})$, where $T$ is the number of communication rounds. This rate matches FedAvg, while significantly reducing communication costs and client memory requirements. Our empirical results also confirm that FSL-SAGE outperforms existing state-of-the-art FSL methods, offering both communication efficiency and accuracy.
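The PyTorch sketch below illustrates the auxiliary-model idea summarized in this abstract under our own simplifying assumptions; the model shapes, optimizers, and function names (`local_step`, `align_step`) are hypothetical and not taken from the FSL-SAGE paper. The client trains in parallel against a loss computed through a small auxiliary model, and the auxiliary model is periodically fitted so that its gradient with respect to the smashed activations matches the gradient returned by the true server.

```python
# Hypothetical sketch of the auxiliary-gradient idea; not the authors'
# reference implementation. Shapes, models, and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

client_model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
aux_model    = nn.Sequential(nn.Linear(128, 10))   # small stand-in for the server model
criterion    = nn.CrossEntropyLoss()
opt_client   = torch.optim.SGD(client_model.parameters(), lr=0.01)
opt_aux      = torch.optim.SGD(aux_model.parameters(), lr=0.01)

def local_step(x, y):
    """Parallel client-side update driven by the auxiliary model's estimate
    of the server-side loss on the smashed activations."""
    smashed = client_model(x)
    loss = criterion(aux_model(smashed), y)
    opt_client.zero_grad()
    loss.backward()
    opt_client.step()

def align_step(x, y, server_grad):
    """Periodic alignment: fit the auxiliary model so that its gradient w.r.t.
    the smashed activations matches the gradient reported by the server."""
    smashed = client_model(x).detach().requires_grad_(True)
    loss = criterion(aux_model(smashed), y)
    est_grad, = torch.autograd.grad(loss, smashed, create_graph=True)
    align_loss = F.mse_loss(est_grad, server_grad)
    opt_aux.zero_grad()
    align_loss.backward()
    opt_aux.step()
```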
Abstract:In the context of communication-centric integrated sensing and communication (ISAC), the orthogonal frequency division multiplexing (OFDM) waveform was proven to be optimal in minimizing ranging sidelobes when random signaling is used. A typical assumption in OFDM-based ranging is that the maximum target delay is smaller than the cyclic prefix (CP) length, which is equivalent to performing a \textit{periodic} correlation between the signal reflected from the target and the transmitted signal. In the multi-user case, such as in orthogonal frequency division multiple access (OFDMA), users are assigned disjoint subsets of subcarriers, which eliminates mutual interference between the communication channels of the different users. However, ranging involves an aperiodic correlation operation for target ranges whose delays exceed the CP length. The aperiodic correlation between signals occupying disjoint frequency bands is not zero, resulting in mutual interference between different user bands; we refer to this as \textit{inter-band} (IB) cross-correlation interference. In this work, we analytically characterize IB interference and quantify its impact on the integrated sidelobe level (ISL). We introduce an orthogonal spreading layer on top of OFDM that reduces IB interference, yielding ISL values significantly lower than those of OFDM without spreading in the multi-user setup. We validate our claims through simulations and via an upper bound on the IB energy, which we show can be minimized using the proposed spreading. However, for the orthogonal spreading to be effective, a price must be paid in spectral utilization, yet another manifestation of the trade-off between sensing accuracy and data communication capacity.
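A small NumPy experiment (our own illustration, not taken from the paper) makes the core observation concrete: two OFDM signals on disjoint subcarrier sets are orthogonal under periodic (circular) correlation, but their aperiodic (linear) cross-correlation is generally nonzero, which is exactly the inter-band interference described above. The subcarrier count and band split below are arbitrary.

```python
# Toy illustration of inter-band interference: disjoint-band OFDM signals are
# orthogonal under circular correlation but not under linear correlation.
import numpy as np

N = 64                                    # subcarriers per OFDM symbol
rng = np.random.default_rng(0)

def ofdm_symbol(subcarriers):
    """Time-domain OFDM symbol carrying random QPSK on the given subcarriers."""
    X = np.zeros(N, dtype=complex)
    X[subcarriers] = np.exp(1j * np.pi / 2 * rng.integers(0, 4, len(subcarriers)))
    return np.fft.ifft(X)

x1 = ofdm_symbol(np.arange(0, 32))        # user 1: lower band
x2 = ofdm_symbol(np.arange(32, 64))       # user 2: upper band

# Periodic (circular) cross-correlation: identically zero for disjoint bands.
periodic = np.fft.ifft(np.fft.fft(x1) * np.conj(np.fft.fft(x2)))

# Aperiodic (linear) cross-correlation: generally nonzero -> IB interference.
aperiodic = np.correlate(x1, x2, mode="full")

print("max |periodic|  =", np.abs(periodic).max())    # numerically zero
print("max |aperiodic| =", np.abs(aperiodic).max())   # clearly nonzero
```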
Abstract:In the physical layer (PHY) of modern cellular systems, information is transmitted as a sequence of resource blocks (RBs) across various domains, with each resource block limited to a certain time and frequency extent. In the PHY of 4G/5G systems, data is transmitted in units of transport blocks (TBs) across a fixed number of physical RBs, based on resource allocation decisions. Sharp band-limiting in the frequency domain can provide good separation between different resource allocations without wasting resources on guard bands. However, sharp filters come at the cost of elongating the overall system impulse response, which can accentuate inter-symbol interference (ISI). In a multi-user setup, such as in Machine Type Communication (MTC), different users are allocated resources across time and frequency and operate at different power levels. If strict band-limiting separation is used, high-power user signals can leak in time into low-power user allocations. The ISI extent, i.e., the number of neighboring symbols that contribute to the interference, depends both on the channel delay spread and on the spectral concentration properties of the signaling waveforms. We hypothesize that a precoder that effectively transforms the OFDM waveform basis into a basis of discrete prolate spheroidal sequences (DPSS) minimizes the ISI extent when strictly confined frequency allocations are used. Analytical expressions for upper bounds on the ISI are derived, and simulation results supporting our hypothesis are presented.
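As a rough sketch of the precoding idea (our assumptions, not the paper's implementation), the snippet below spreads data symbols over the most spectrally concentrated DPSS of an allocation using SciPy's `dpss` routine; the block size, bandwidth, and symbol count are arbitrary. The resulting time-domain block keeps nearly all of its energy inside the allocated band, so sharp band-limiting barely elongates it.

```python
# Sketch of spreading data over a DPSS basis inside a band-limited allocation.
import numpy as np
from scipy.signal.windows import dpss

N = 64           # samples per block
W = 0.25         # normalized half-bandwidth of the frequency allocation
K = 28           # data symbols per block; K < 2*N*W keeps only well-concentrated sequences

# Precoder columns: the K most spectrally concentrated DPSS of length N.
V = dpss(N, N * W, Kmax=K).T                  # shape (N, K), mutually orthogonal columns
V = V / np.linalg.norm(V, axis=0)             # normalize each sequence to unit energy

d = (np.random.randn(K) + 1j * np.random.randn(K)) / np.sqrt(2)   # data symbols
x = V @ d                                     # time-domain block to transmit

# Check: fraction of transmit energy inside the allocated band [-W, W].
X = np.fft.fft(x, 8 * N)
f = np.fft.fftfreq(8 * N)
in_band = np.sum(np.abs(X[np.abs(f) <= W]) ** 2) / np.sum(np.abs(X) ** 2)
print(f"energy inside the allocation: {in_band:.4f}")   # close to 1
```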
Abstract:In the physical layer (PHY) of modern cellular systems, information is transmitted as a sequence of resource blocks (RBs) across various domains, with each resource block limited to a certain time and frequency duration. In the PHY of 4G/5G systems, data is transmitted in units of transport blocks (TBs) across a fixed number of physical RBs, based on resource allocation decisions. This simultaneously time- and frequency-localized structure of resource allocation is at odds with the fundamental time-frequency compactness limits. Specifically, the band-limiting operation disrupts the time localization and leads to inter-block interference (IBI). The IBI extent, i.e., the number of neighboring blocks that contribute to the interference, depends mainly on the spectral concentration properties of the signaling waveforms. Deviating from standard Gabor-frame-based multi-carrier approaches, which use time-frequency shifted versions of a single prototype pulse, we propose using a set of multiple mutually orthogonal pulse shapes that are not related by a time-frequency shift. We hypothesize that using discrete prolate spheroidal sequences (DPSS) as the set of waveform pulse shapes reduces IBI. Analytical expressions for upper bounds on the IBI are derived, and simulation results that support our hypothesis are provided.
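To make the spectral-concentration argument tangible, the toy check below (our illustration with arbitrary parameters, not the paper's experiment) prints the in-band energy fraction, i.e., the concentration eigenvalue, of each DPSS pulse and verifies their mutual orthogonality; roughly the first 2NW pulses concentrate almost all of their energy in the allocated band, which is what limits leakage into neighboring blocks.

```python
# Toy check of why DPSS pulse shapes limit inter-block interference.
import numpy as np
from scipy.signal.windows import dpss

N, W = 128, 0.1                       # block length and normalized half-bandwidth
K = int(2 * N * W)                    # number of well-concentrated sequences
pulses, ratios = dpss(N, N * W, Kmax=K + 4, return_ratios=True)

# In-band energy concentration (eigenvalues): ~1 for the first K pulses, then
# it drops off sharply; pulses beyond ~2*N*W are poor waveform candidates.
for k, lam in enumerate(ratios):
    print(f"pulse {k:2d}: in-band energy fraction = {lam:.6f}")

# The pulses are mutually orthogonal, so K parallel data streams can be
# carried without intra-block interference.
gram = pulses @ pulses.T
print("max off-diagonal of Gram matrix:",
      np.abs(gram - np.diag(np.diag(gram))).max())
```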