Location information is expected to be the key to meeting the needs of communication and context-aware services in 6G systems. User localization is achieved based on delay and/or angle estimation using uplink or downlink pilot signals. However, hardware impairments (HWIs) distort the signals at both the transmitter and receiver sides and thus affect the localization performance. While this impact can be ignored at lower frequencies where HWIs are less severe, modeling and analysis efforts are needed for 6G to evaluate the localization degradation due to HWIs. In this work, we model various types of impairments and conduct a misspecified Cram\'er-Rao bound analysis to evaluate the HWI-induced performance loss. Simulation results with different types of HWIs show that each HWI leads to a different level of degradation in angle and delay estimation performance.
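For reference, the standard misspecified Cram\'er-Rao bound machinery underlying such an analysis (a generic sketch of the well-known form, not this paper's specific derivation) is
\[
\mathrm{MCRB}(\boldsymbol{\theta}_0) = \mathbf{A}_{\boldsymbol{\theta}_0}^{-1}\,\mathbf{B}_{\boldsymbol{\theta}_0}\,\mathbf{A}_{\boldsymbol{\theta}_0}^{-1},
\qquad
\mathrm{LB}(\boldsymbol{\theta}_0) = \mathrm{MCRB}(\boldsymbol{\theta}_0) + (\boldsymbol{\theta}_0-\bar{\boldsymbol{\eta}})(\boldsymbol{\theta}_0-\bar{\boldsymbol{\eta}})^{\mathsf{T}},
\]
where $\tilde{f}(\mathbf{y};\boldsymbol{\theta})$ is the mismatched (impairment-free) likelihood assumed by the estimator, $\boldsymbol{\theta}_0$ is the pseudo-true parameter minimizing the Kullback-Leibler divergence from the true (impaired) model, $[\mathbf{A}_{\boldsymbol{\theta}_0}]_{ij} = \mathbb{E}\{\partial^2 \ln \tilde{f}/\partial\theta_i\,\partial\theta_j\}$ and $[\mathbf{B}_{\boldsymbol{\theta}_0}]_{ij} = \mathbb{E}\{\partial \ln \tilde{f}/\partial\theta_i \cdot \partial \ln \tilde{f}/\partial\theta_j\}$ with expectations taken under the true model and evaluated at $\boldsymbol{\theta}_0$, and $\bar{\boldsymbol{\eta}}$ denotes the true angle/delay parameters; the second term captures the HWI-induced bias that a classical CRB cannot express.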
With the Autonomous Vehicle (AV) industry shifting towards Autonomy 2.0, the performance of self-driving systems starts to rely heavily on large quantities of expert driving demonstrations. However, collecting this demonstration data typically involves expensive HD sensor suites (LiDAR + RADAR + cameras), which quickly becomes financially infeasible at the scales required. This motivates the use of commodity vision sensors for data collection, which are an order of magnitude cheaper than the HD sensor suites but offer lower fidelity. If it were possible to leverage these for training an AV motion planner, observing the `long tail' of driving events would become a financially viable strategy. As our main contribution, we show that it is possible to train a high-performance motion planner on commodity vision data that outperforms planners trained on HD-sensor data, at a fraction of the cost. We do this by comparing the performance of the autonomy system when trained on the two sensor configurations, and by showing that the lower sensor fidelity can be compensated for by increased quantity: a planner trained on 100h of commodity vision data outperforms one trained on 25h of expensive HD data. We also share the technical challenges we had to tackle to make this work. To the best of our knowledge, we are the first to demonstrate that this is possible using real-world data.
Knowledge Extraction (KE), which aims to extract structured information from unstructured texts, often suffers from data scarcity and emerging unseen types, i.e., low-resource scenarios. Many neural approaches to low-resource KE have been investigated and have achieved impressive performance. In this paper, we present a literature review of KE in low-resource scenarios and systematically categorize existing works into three paradigms: (1) exploiting higher-resource data, (2) exploiting stronger models, and (3) exploiting data and models together. In addition, we describe promising applications and outline some potential directions for future research. We hope that our survey can help both the academic and industrial communities to better understand this field, inspire more ideas, and foster broader applications.
Knowledge graph completion aims to address the problem of extending a KG with missing triples. In this paper, we propose GenKGC, an approach that converts knowledge graph completion into a sequence-to-sequence generation task using a pre-trained language model. We further introduce relation-guided demonstration and entity-aware hierarchical decoding for better representation learning and faster inference. Experimental results on three datasets show that our approach achieves performance better than or comparable to baselines, with faster inference than previous methods based on pre-trained language models. We also release a new large-scale Chinese knowledge graph dataset, AliopenKG500, for research purposes. Code and datasets are available at https://github.com/zjunlp/PromptKGC/tree/main/GenKGC.
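To make the seq2seq recipe concrete, here is a minimal sketch of casting a (head, relation, ?) query as text generation. The serialization format, the demonstration-selection rule, and the `facebook/bart-base` checkpoint are illustrative assumptions, not GenKGC's exact implementation (which is in the linked repository); without fine-tuning on KG triples the model will not produce meaningful tail entities.

```python
# Illustrative sketch of casting KG completion (head, relation, ?) as
# sequence-to-sequence generation; not GenKGC's exact format.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def build_input(head, relation, demonstrations):
    """Serialize a (head, relation, ?) query, prefixed with
    relation-guided demonstrations: completed triples sharing the relation."""
    demo_text = " ".join(f"{h} {relation} {t}." for h, t in demonstrations)
    return f"{demo_text} {head} {relation}"

# Hypothetical example: predict the tail entity for (Paris, capital of, ?).
text = build_input("Paris", "capital of", [("Berlin", "Germany"), ("Rome", "Italy")])
ids = tokenizer(text, return_tensors="pt").input_ids
out = model.generate(ids, num_beams=5, max_length=16)
print(tokenizer.decode(out[0], skip_special_tokens=True))  # ideally "France"
```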
Few-shot Learning (FSL) aims to make predictions based on a limited number of samples. Structured data such as knowledge graphs and ontology libraries have been leveraged to benefit the few-shot setting in various tasks. However, the priors adopted by existing methods suffer from missing, noisy, and heterogeneous knowledge, which hinders few-shot performance. In this study, we explore knowledge injection for FSL with pre-trained language models and propose ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop an ontology transformation based on the external knowledge graph to address the knowledge-missing issue, which completes and converts structured knowledge into text. We further introduce span-sensitive knowledge injection via a visible matrix to select informative knowledge and handle the knowledge-noise issue. To bridge the gap between knowledge and text, we propose a collective training algorithm to optimize the representations jointly. We evaluate the proposed OntoPrompt on three tasks, namely relation extraction, event extraction, and knowledge graph completion, across eight datasets. Experimental results demonstrate that our approach obtains better few-shot performance than baselines.
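The visible-matrix idea can be illustrated with a small sketch in the spirit of K-BERT-style knowledge injection; the construction below, including the function name and span convention, is an assumption for illustration and not OntoPrompt's exact mechanism. Injected knowledge tokens are made mutually visible only with the span they describe, so noisy knowledge cannot perturb unrelated context positions.

```python
import numpy as np

def visible_matrix(n_text, spans):
    """Build an attention-visibility mask for n_text text tokens followed
    by appended knowledge tokens.

    spans: list of (anchor_start, anchor_end, n_know) tuples; each knowledge
    segment of n_know tokens is mutually visible only with its anchor span
    [anchor_start, anchor_end) and with itself.
    """
    n_total = n_text + sum(k for _, _, k in spans)
    vis = np.zeros((n_total, n_total), dtype=bool)
    vis[:n_text, :n_text] = True          # text tokens all see each other
    offset = n_text
    for start, end, k in spans:
        know = slice(offset, offset + k)
        vis[know, know] = True            # knowledge tokens see their own segment
        vis[know, start:end] = True       # ...and their anchor span
        vis[start:end, know] = True       # anchor span sees its knowledge
        offset += k
    return vis

# Hypothetical example: 6 text tokens, 2 knowledge tokens attached to span [1, 3).
print(visible_matrix(6, [(1, 3, 2)]).astype(int))
```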
Convolutional Neural Networks (CNNs) demonstrate great performance in various applications but have high computational complexity. Quantization is applied to reduce the latency and storage cost of CNNs. Among quantization methods, Binary and Ternary Weight Networks (BWNs and TWNs) have a unique advantage over 8-bit and 4-bit quantization: they replace the multiplication operations in CNNs with additions, which are favoured on In-Memory-Computing (IMC) devices. IMC acceleration for BWNs has been widely studied; however, although TWNs offer higher accuracy and better sparsity, IMC acceleration for TWNs has received limited attention. TWNs are inefficient on existing IMC devices because their sparsity is not exploited and the addition operation is not performed efficiently. In this paper, we propose FAT, a novel IMC accelerator for TWNs. First, we propose a Sparse Addition Control Unit, which utilizes the sparsity of TWNs to skip the null operations on zero weights. Second, we propose a fast addition scheme based on the memory Sense Amplifier that avoids the time overhead of both carry propagation and writing the carry back to the memory cells. Third, we propose a Combined-Stationary data mapping that reduces the data movement of both activations and weights and increases the parallelism across memory columns. Simulation results show that, for addition operations at the Sense Amplifier level, FAT achieves 2.00X speedup, 1.22X power efficiency, and 1.22X area efficiency compared with the state-of-the-art IMC accelerator ParaPIM. On networks with 80% sparsity, FAT achieves 10.02X speedup and 12.19X energy efficiency over ParaPIM.
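As a software analogue of the idea (not the FAT hardware itself), the following sketch shows why ternary weights suit IMC: a dot product degenerates into sign-controlled additions, and zero weights, which dominate at 80% sparsity, can simply be skipped, which is what the Sparse Addition Control Unit does in hardware.

```python
import numpy as np

def ternary_dot(activations, weights):
    """Multiplication-free dot product with ternary weights in {-1, 0, +1}.

    Zero weights are skipped entirely (the software analogue of skipping
    null operations); +1/-1 weights reduce to add/subtract, which is what
    makes TWNs attractive on in-memory-computing hardware.
    """
    acc = 0
    for a, w in zip(activations, weights):
        if w == 0:
            continue              # skip the null operation on a zero weight
        acc += a if w > 0 else -a
    return acc

# Hypothetical example: 80% of weights are zero, so 80% of ops are skipped.
rng = np.random.default_rng(0)
acts = rng.integers(0, 256, size=10)
wts = np.array([0, 1, 0, 0, -1, 0, 0, 1, 0, 0])
print(ternary_dot(acts, wts), acts @ wts)  # identical results
```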
Attitude determination is a popular application of Global Navigation Satellite Systems (GNSS). Many methods have been developed to solve the attitude determination problem, with different performance trade-offs. We develop a constrained wrapped least-squares (C-WLS) method for high-accuracy attitude determination. The approach is built on an optimization model that leverages, in an innovative way, prior information about the antenna array and the integer nature of the carrier-phase ambiguities. It adopts an efficient search strategy to estimate the vehicle's attitude parameters directly from ambiguous carrier-phase observations, without requiring prior carrier-phase ambiguity fixing. The performance of the proposed method is evaluated via simulations and experimentally, using data collected with multiple GNSS receivers. The simulation and experimental results demonstrate excellent performance, with the proposed method outperforming three prominent attitude determination algorithms: the ambiguity function method, the constrained LAMBDA method, and the multivariate constrained LAMBDA method.
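A generic sketch of how a wrapped least-squares attitude objective can be set up (all symbols are illustrative assumptions; the paper's exact C-WLS model may differ):
\[
\hat{\mathbf{R}} = \arg\min_{\mathbf{R}\in\mathrm{SO}(3)} \sum_{i,j} w_{ij}\, \mathrm{wrap}\!\left(\phi_{ij} - \frac{\mathbf{u}_i^{\mathsf{T}} \mathbf{R}\,\mathbf{b}_j}{\lambda}\right)^{2},
\qquad
\mathrm{wrap}(x) = x - \lfloor x \rceil,
\]
where $\phi_{ij}$ is the fractional carrier-phase observable for satellite $i$ and antenna baseline $j$, $\mathbf{u}_i$ the unit line-of-sight vector, $\mathbf{b}_j$ the known body-frame baseline vector, $\lambda$ the carrier wavelength, $w_{ij}$ a weight, and $\lfloor \cdot \rceil$ rounding to the nearest integer. Wrapping the residual absorbs the integer ambiguities, so the attitude $\mathbf{R}$ can be searched for directly without explicit ambiguity fixing, while the $\mathrm{SO}(3)$ constraint encodes the prior antenna-array geometry.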
Benefiting from the promising features of second-order correlation, ghost imaging (GI) has received extensive attention in recent years. At the same time, GI suffers from a poor trade-off between sampling rate and imaging quality. The traditional image reconstruction method in GI accumulates, over all illuminations, the product of each speckle pattern and the corresponding bucket signal. We observe that this reconstruction process is very similar to a Recurrent Neural Network (RNN), a class of deep learning models. In this paper, we propose a novel method that effectively implements GI on an RNN architecture, called GI-RNN. In an RNN, the state of each layer is determined by the output of the previous layer and the input of the current layer, and the output of the network is the sum of all previous states. Accordingly, we take the speckle pattern of each illumination and the corresponding bucket signal as the input of each layer, so that the network output, the sum over all layers of the speckle-bucket contributions, is the image of the target. The testing results show that the proposed method can achieve image reconstruction at a very low sampling rate (0.38$\%$). Moreover, we compare GI-RNN with the traditional GI algorithm and a compressed sensing algorithm. Across different targets, GI-RNN is on average 6.61 dB better than the compressed sensing algorithm and 12.58 dB better than the traditional GI algorithm. In our view, the proposed method is an important step toward practical applications of GI.
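The structural analogy can be written down directly. The sketch below reproduces only the recurrence the abstract describes, a running state updated with each speckle/bucket pair, using illustrative sizes and a random target; GI-RNN itself is additionally a trained network, which this sketch does not include.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, M = 32, 32, 400                   # image size, number of illuminations
target = rng.random((H, W))             # hypothetical object (unknown in practice)

state = np.zeros((H, W))                # RNN-like hidden state
for _ in range(M):
    speckle = rng.random((H, W))        # illumination pattern (layer input)
    bucket = np.sum(speckle * target)   # single-pixel bucket measurement
    # State update: previous state plus this layer's (speckle, bucket)
    # contribution; subtracting the mean gives the background-subtracted
    # second-order correlation of traditional GI.
    state = state + bucket * (speckle - speckle.mean())

image = state / M                       # "network output": sum of all states
```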
Terahertz (THz) communications are celebrated as key enablers for converged localization and sensing in future sixth-generation (6G) wireless communication systems and beyond. Instead of being a byproduct of the communication system, localization in 6G is indispensable for location-aware communications. Towards this end, we aim to identify the prospects, challenges, and requirements of THz localization techniques. We first review the history and trends of localization methods and discuss their objectives, constraints, and applications in contemporary communication systems. We then detail the latest advances in THz communications and introduce the THz-specific channel and system models. Afterward, we formulate THz-band localization as a 3D position/orientation estimation problem, detailing geometry-based localization techniques and describing potential THz localization and sensing extensions. We further formulate the offline design and online optimization of THz localization systems, provide numerical simulation results, and conclude by providing insight into interdisciplinary future research directions. Preliminary results illustrate that under the same total transmission power and time, THz-based localization is ~5 (~20) times more accurate than mmWave-based localization without (with) prior position information.
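As a generic example of the geometry-based models such a formulation builds on (the symbols are illustrative, not the paper's notation):
\[
\tau_m = \frac{\lVert \mathbf{p} - \mathbf{q}_m \rVert}{c} + B,
\qquad
\mathbf{u}_m = \mathbf{R}^{\mathsf{T}}\,\frac{\mathbf{q}_m - \mathbf{p}}{\lVert \mathbf{q}_m - \mathbf{p} \rVert},
\]
where $\mathbf{p}$ and $\mathbf{R}$ are the unknown 3D user position and orientation, $\mathbf{q}_m$ the position of the $m$-th anchor, $B$ a clock bias, and $\mathbf{u}_m$ the arrival direction in the user's local frame; the delays and angles estimated from pilot signals are then inverted, e.g., via maximum-likelihood estimation, to recover $(\mathbf{p}, \mathbf{R}, B)$.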