Abstract: Most existing semantic communication systems employ analog modulation, which is incompatible with modern digital communication systems. Although several digital transmission approaches have been proposed to address this issue, an end-to-end bit-level method that is compatible with arbitrary modulation formats, robust to channel noise, and free from quantization errors remains lacking. To this end, we propose BitSemCom, a novel bit-level semantic communication framework that realizes true joint source-channel coding (JSCC) at the bit level. Specifically, we introduce a modular learnable bit mapper that establishes a probabilistic mapping between continuous semantic features and discrete bits, utilizing the Gumbel-Softmax trick to enable differentiable bit generation. Simulation results on image transmission demonstrate that BitSemCom achieves both competitive performance and superior robustness compared to traditional separate source-channel coding (SSCC) schemes, and outperforms deep learning-based JSCC with uniform 1-bit quantization, validating the effectiveness of the learnable bit mapper. Despite these improvements, the bit mapper adds only 0.42% additional parameters and 0.09% additional computational complexity, making BitSemCom a lightweight and practical solution for real-world semantic communication.
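The core mechanism here, a probabilistic feature-to-bit mapping made differentiable via the Gumbel-Softmax trick, can be illustrated with a minimal NumPy sketch. The projection matrix `W`, the two-class (bit 0 / bit 1) logit layout, and the function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a relaxed (soft) sample from a categorical distribution.

    logits: (..., K) unnormalized log-probabilities.
    tau: temperature; smaller values push samples toward one-hot.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def soft_bits(features, W, tau=0.5, rng=None):
    """Hypothetical bit mapper: project each continuous feature vector to
    2-class logits (bit = 0 or 1) and return the relaxed probability of '1'.
    In an autograd framework, a straight-through estimator would round these
    to hard bits in the forward pass while keeping the soft gradient."""
    logits = features @ W                        # (N, D) @ (D, 2) -> (N, 2)
    soft = gumbel_softmax(logits, tau=tau, rng=rng)
    return soft[..., 1]                          # relaxed bit in (0, 1)
```

At low temperature the relaxed bits concentrate near 0 or 1, so the transmitter can emit genuinely discrete bits at inference while training end-to-end without quantization error.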




Abstract: Multimodal semantic communication has gained widespread attention due to its ability to enhance downstream task performance. A key challenge in such systems is the effective fusion of features from different modalities, which requires the extraction of rich and diverse semantic representations from each modality. To this end, we propose ProMSC-MIS, a Prompt-based Multimodal Semantic Communication system for Multi-spectral Image Segmentation. Specifically, we propose a pre-training algorithm where features from one modality serve as prompts for another, guiding unimodal semantic encoders to learn diverse and complementary semantic representations. We further introduce a semantic fusion module that combines cross-attention mechanisms and squeeze-and-excitation (SE) networks to effectively fuse cross-modal features. Simulation results show that ProMSC-MIS significantly outperforms benchmark methods across various channel-source compression levels, while maintaining low computational complexity and storage overhead. Our scheme has great potential for applications such as autonomous driving and nighttime surveillance.
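The two building blocks of the described fusion module, cross-attention between modalities and SE-style channel gating, can be sketched in a few lines of NumPy. This is a generic single-head illustration under assumed shapes and weight names (`Wq`, `Wk`, `Wv`, `W1`, `W2`), not the ProMSC-MIS architecture itself:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Single-head cross-attention: tokens from one modality (queries)
    attend over tokens from the other modality (keys/values)."""
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # scaled dot-product
    return softmax(scores, axis=-1) @ V

def se_block(x, W1, W2):
    """Squeeze-and-excitation: global-average-pool the tokens (squeeze),
    pass through a bottleneck MLP, then sigmoid-gate each channel."""
    z = x.mean(axis=0)                           # squeeze: (T, C) -> (C,)
    s = 1.0 / (1.0 + np.exp(-(np.maximum(z @ W1, 0.0) @ W2)))  # ReLU, sigmoid
    return x * s                                 # excitation: reweight channels
```

In a fusion module of this kind, cross-attention aligns and mixes features across modalities, while the SE gate adaptively emphasizes the channels most informative for the segmentation task.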