The next-generation wireless technologies, commonly referred to as the sixth generation (6G), are envisioned to support extreme communication capacity and, in particular, disruptive network sensing capabilities. The terahertz (THz) band is one potential enabler due to its enormous unused spectrum and the high spatial resolution afforded by short wavelengths and wide bandwidths. Unlike earlier surveys, this paper presents a comprehensive treatment and technology survey of THz communications and sensing, covering the advantages, applications, propagation characterization, channel modeling, measurement campaigns, antennas, transceiver devices, beamforming, networking, the integration of communications and sensing, and experimental testbeds. Starting from the motivation and use cases, we survey the development and historical perspective of THz communications and sensing in light of the anticipated 6G requirements. We explore radio propagation, channel modeling, and measurements for the THz band. We then discuss the transceiver requirements, architectures, technological challenges, and approaches, together with means to compensate for the high propagation losses through appropriate antenna and beamforming solutions. We also survey several system technologies required by, or beneficial for, THz systems. The synergistic design of sensing and communications is explored in depth. Practical trials, demonstrations, and experiments are also summarized. The paper gives a holistic view of the current state of the art and highlights the open issues and challenges for further research towards 6G.
A wireless federated learning system is investigated in which a server and workers exchange uncoded information via orthogonal wireless channels. Since the workers frequently upload local gradients to the server over bandwidth-limited channels, the uplink transmission from the workers to the server becomes a communication bottleneck. Therefore, a one-shot distributed principal component analysis (PCA) is leveraged to reduce the dimension of the uploaded gradients and relieve the communication bottleneck. A PCA-based wireless federated learning (PCA-WFL) algorithm and its accelerated version (i.e., PCA-AWFL) are proposed based on the low-dimensional gradients and Nesterov's momentum. For non-convex loss functions, a finite-time analysis is performed to quantify the impact of the system hyper-parameters on the convergence of the PCA-WFL and PCA-AWFL algorithms. The PCA-AWFL algorithm is theoretically certified to converge faster than the PCA-WFL algorithm. Moreover, the convergence rates of the PCA-WFL and PCA-AWFL algorithms quantitatively reveal a linear speedup with respect to the number of workers over the vanilla gradient descent algorithm. Numerical results demonstrate the improved convergence rates of the proposed PCA-WFL and PCA-AWFL algorithms over the benchmarks.
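The core mechanism described above, compressing worker gradients onto a shared low-dimensional PCA subspace before the uplink and applying a momentum update at the server, can be illustrated with a minimal sketch. This is not the authors' PCA-WFL/PCA-AWFL implementation: the dimensions, the projection matrix `U`, and the Nesterov-style update below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d-dimensional gradients compressed to k numbers.
d, k, num_workers = 50, 5, 4

# One-shot PCA: a shared projection U (d x k) with orthonormal columns,
# e.g. estimated once from an initial sample of gradients (here: random data).
sample = rng.standard_normal((d, 200))
U, _, _ = np.linalg.svd(sample, full_matrices=False)
U = U[:, :k]                       # top-k principal directions

def compress(grad):                # worker side: only k values on the uplink
    return U.T @ grad

def decompress(code):              # server side: reconstruct in R^d
    return U @ code

# Server averages the low-dimensional codes, then takes a Nesterov-style step.
theta = np.zeros(d)
momentum = np.zeros(d)
eta, beta = 0.1, 0.9

local_grads = [rng.standard_normal(d) for _ in range(num_workers)]
avg_grad = decompress(np.mean([compress(g) for g in local_grads], axis=0))

momentum = beta * momentum + avg_grad
theta = theta - eta * (avg_grad + beta * momentum)  # accelerated update
```

The uplink cost per worker drops from d to k scalars per round, which is the communication saving the abstract refers to; the acceleration term mirrors the role of Nesterov's momentum in PCA-AWFL.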
Semantic communication is a new paradigm that exploits deep learning models to enable end-to-end communication processes, and recent studies have shown that it can achieve better noise resiliency than traditional communication schemes in the low signal-to-noise ratio (SNR) regime. To achieve multiple access in semantic communication, we propose a deep learning-based multiple access (DeepMA) method that trains semantic communication models with the abilities of joint source-channel coding (JSCC) and orthogonal signal modulation. DeepMA is realized by a DeepMA network (DMANet), which comprises several independent encoder-decoder pairs (EDPs). The DeepMA encoders encode the input data as mutually orthogonal semantic symbol vectors (SSVs) such that each DeepMA decoder can recover its own target data from a received mixed SSV (MSSV) superposed from the SSV components transmitted by different encoders. We describe frameworks of DeepMA in wireless device-to-device (D2D), downlink, and uplink channel multiplexing scenarios, along with the training algorithm. We evaluate the performance of the proposed DeepMA on wireless image transmission tasks and compare it with the attention module-based deep JSCC (ADJSCC) method and conventional communication schemes using better portable graphics (BPG) and low-density parity-check (LDPC) codes. The results show that the proposed DeepMA achieves an effective, flexible, and privacy-preserving channel multiplexing process, and that it yields bandwidth efficiency comparable to conventional multiple access schemes.
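The key property above, that mutually orthogonal symbol vectors can be superposed on one channel and separated again by projection, can be shown with a toy linear sketch. This is not the trained DMANet (the real encoders and decoders are learned neural networks); the signal dimensions and the fixed orthonormal bases below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "encoders" place their symbols in mutually orthogonal subspaces
# of an L-dimensional signal space (hypothetical sizes).
L, k = 8, 4
Q, _ = np.linalg.qr(rng.standard_normal((L, L)))
B1, B2 = Q[:, :k], Q[:, k:]      # orthonormal bases with B1.T @ B2 == 0

s1 = rng.standard_normal(k)      # semantic symbols of user 1
s2 = rng.standard_normal(k)      # semantic symbols of user 2

mixed = B1 @ s1 + B2 @ s2        # superposed mixed vector on the shared channel

# Each decoder projects the mixed signal onto its own subspace only.
s1_hat = B1.T @ mixed
s2_hat = B2.T @ mixed
```

Because each decoder only knows its own basis, it recovers its own symbols exactly while the other user's contribution projects to zero, which also hints at the privacy-preserving aspect mentioned above.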
In this letter, an orthogonal time frequency space (OTFS) based non-orthogonal multiple access (NOMA) scheme is investigated for the coordinated direct and relay transmission system, where a source communicates directly with a high-mobility near user and requires relaying assistance to serve a far user that also has high mobility. Due to the coexistence of signal superposition coding and multi-domain transformation, the performance of OTFS-based NOMA is usually challenging to characterize from a theoretical perspective. To accurately evaluate the system performance of the proposed scheme, we derive closed-form expressions for the outage probability and the outage sum rate by using the inversion formula and the characteristic function. Numerical results verify the performance superiority and the effectiveness of the proposed scheme.
In fifth generation (5G) new radio (NR), the demodulation reference signal (DMRS) is employed for channel estimation as part of coherent demodulation of the physical uplink shared channel. However, DMRS spoofing poses a serious threat to 5G NR, since inaccurate channel estimation severely degrades the decoding performance. In this correspondence, we propose to exploit the spatial sparsity structure of the channel to detect DMRS spoofing, motivated by the fact that this structure is significantly impacted when DMRS spoofing occurs. We first extract the spatial sparsity structure of the channel by solving a sparse feature retrieval problem, and then propose a sequential sparsity structure anomaly detection method to detect DMRS spoofing. In simulation experiments, we adopt the clustered delay line (CDL) based channel model from the 3GPP standards for verification. Numerical results show that our method outperforms both the subspace dimension based and energy detector based methods.
With the drive to create a decentralized digital economy, Web 3.0 has become a cornerstone of digital transformation, built on computing-force networking, distributed data storage, and blockchain. With the rapid realization of quantum devices, Web 3.0 is being developed in parallel with the deployment of quantum cloud computing and the quantum Internet. In this regard, quantum computing disrupts the cryptographic systems that currently protect data security, while also reshaping modern cryptography through the advantages of quantum computation and communication. Therefore, in this paper, we introduce a quantum blockchain-driven Web 3.0 framework that provides information-theoretic security for decentralized data transfer and payment transactions. First, we present the framework of quantum blockchain-driven Web 3.0 with future-proof security during the transmission of data and transaction information. Next, we discuss the potential applications and challenges of implementing quantum blockchain in Web 3.0. Finally, we describe a use case for quantum non-fungible tokens (NFTs) and propose a quantum deep learning-based optimal auction for NFT trading to maximize the achievable revenue and ensure sufficient liquidity in Web 3.0. In this way, the proposed framework can achieve provable security and sustainability for the next-generation decentralized digital society.
Cell-free (CF) massive multiple-input multiple-output (MIMO) is considered a promising technology for approaching the ultimate performance limit. However, due to its distributed architecture and low-cost access points (APs), the signals received at user equipments (UEs) are most likely asynchronous. In this paper, we investigate the performance of CF massive MIMO systems with asynchronous reception, including the effects of both delay and oscillator phases. Taking into account the imperfect channel state information caused by phase asynchronization and pilot contamination, we obtain novel closed-form downlink spectral efficiency (SE) expressions for coherent and non-coherent data transmission schemes, respectively. Simulation results show that asynchronous reception destroys the orthogonality of the pilots and the coherent transmission of data, and thus degrades system performance. In addition, obtaining a highly accurate delay phase is essential for CF massive MIMO systems to achieve the coherent transmission gain. Moreover, the oscillator phase of the UEs has a larger effect on the SE than that of the APs, because the latter effect can be significantly reduced by increasing the number of antennas.
Due to the rapid growth of data transmissions in the Internet of Vehicles (IoV), finding schemes that can effectively alleviate access congestion has become an important issue. Recently, many traffic control schemes have been studied. Nevertheless, most existing studies do not consider the dynamics of traffic or the heterogeneous requirements of different IoV applications, both of which are significant for random access resource allocation. In this paper, we consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it. Firstly, IoV devices are divided into several classes based on their delay characteristics, and the objective of maximizing the number of successfully transmitted packets subject to a success-rate constraint is formulated. Then, the optimization problem is transformed into a Markov decision process (MDP) model. Finally, the access class barring (ACB) factors are obtained via the PPO method to maximize the number of successfully accessing devices. Simulations verify that the proposed algorithm outperforms existing schemes in terms of successful access events and delay.
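The ACB mechanism that the learned factors control can be sketched in a few lines: each device draws a uniform random number per access slot and is allowed to attempt access only if the draw falls below its class's barring factor. This is a minimal illustration of ACB itself, not the paper's PPO controller; the class names and factor values below are hypothetical (in the paper they would be produced by the PPO policy).

```python
import random

def acb_attempt(barring_factor, rng=random):
    """One ACB check: the device may transmit in this slot only if a
    uniform draw falls below its class's barring factor."""
    return rng.random() < barring_factor

# Hypothetical per-class barring factors set by the controller:
# delay-sensitive traffic is barred less often than delay-tolerant traffic.
barring = {"delay_sensitive": 0.9, "delay_tolerant": 0.3}

random.seed(1)
attempts = {cls: sum(acb_attempt(p) for _ in range(1000))
            for cls, p in barring.items()}
```

Over 1000 slots, roughly 90% of the delay-sensitive checks and 30% of the delay-tolerant checks succeed, showing how per-class factors throttle contention differently.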
Continuously monitoring grid assets is critical for ensuring the reliable operation of the electricity grid and for improving its resilience in case of a defect. Among the several asset monitoring techniques in use, power line communication (PLC) enables a low-cost cable diagnostics solution by re-using smart grid data communication modems to also infer the cable health from the inherently estimated communication channel state information. Traditional PLC-based cable diagnostics solutions depend on prior knowledge of the cable type, network topology, and/or characteristics of the anomalies. In contrast, we develop an asset monitoring technique in this paper that can detect various types of anomalies in the grid without any prior domain knowledge. To this end, we design a solution that first uses time-series forecasting to predict the PLC channel state information at any given point in time based on its historical data. Under the assumption that the prediction error follows a Gaussian distribution, we then perform a chi-squared statistical test on the resultant Mahalanobis distance to build our anomaly detector. We demonstrate the effectiveness and universality of our solution via evaluations conducted using both synthetic and real-world data extracted from low- and medium-voltage distribution networks.
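The detection step above, a chi-squared test on the Mahalanobis distance of a Gaussian prediction error, can be sketched as follows. This is a generic illustration of that statistical test, not the paper's full pipeline (the forecaster is omitted); the error dimension, covariance, and fault vector are made-up values, and the threshold is the standard chi-squared critical value for 3 degrees of freedom at significance level 0.001.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model: the forecaster's prediction error on the CSI vector is
# zero-mean Gaussian with covariance Sigma (here 3-dimensional, diagonal).
dim = 3
Sigma = np.diag([0.5, 1.0, 2.0])
Sigma_inv = np.linalg.inv(Sigma)

# Chi-squared critical value for dim = 3 degrees of freedom at the 99.9%
# level (alpha = 0.001), taken from standard tables.
THRESHOLD = 16.27

def is_anomaly(error):
    """Flag a sample whose squared Mahalanobis distance exceeds the
    chi-squared threshold, i.e. an improbably large prediction error."""
    d2 = error @ Sigma_inv @ error
    return bool(d2 > THRESHOLD)

normal_error = rng.multivariate_normal(np.zeros(dim), Sigma)
fault_error = np.array([5.0, 5.0, 5.0])   # large deviation, e.g. a cable fault
```

Under the Gaussian assumption, the squared Mahalanobis distance of a healthy sample follows a chi-squared distribution with `dim` degrees of freedom, so the threshold directly fixes the false-alarm rate of the detector.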
With the development of 5G and the Internet of Things, massive numbers of wireless devices need to share the limited spectrum resources. Dynamic spectrum access (DSA) is a promising paradigm for remedying the inefficient spectrum utilization brought about by the historical command-and-control approach to spectrum allocation. In this paper, we investigate the distributed multi-user DSA problem in a typical multi-channel cognitive radio network. The problem is formulated as a decentralized partially observable Markov decision process (Dec-POMDP), and we propose a centralized offline training and distributed online execution framework based on cooperative multi-agent reinforcement learning (MARL). We employ the deep recurrent Q-network (DRQN) to address the partial observability of the state for each cognitive user. The ultimate goal is to learn a cooperative strategy that maximizes the sum throughput of the cognitive radio network in a distributed fashion, without any exchange of coordination information between cognitive users. Finally, we validate the proposed algorithm in various settings through extensive experiments. The simulation results show that the proposed algorithm converges quickly and achieves near-optimal performance.