In recent years, pretrained language models (PLMs) have achieved remarkable success on several downstream tasks, demonstrating their power in modeling language. To better understand and leverage what PLMs have learned, several techniques have emerged to probe the syntactic structures entailed by PLMs. However, few efforts have been made to explore the grounding capabilities of PLMs, which are also essential. In this paper, we highlight the ability of PLMs to discover which token should be grounded to which concept when combined with our proposed erasing-then-awakening approach. Empirical studies on four datasets demonstrate that our approach can awaken latent grounding that is understandable to human experts, even though it is not exposed to such labels during training. More importantly, our approach shows great potential for benefiting downstream semantic parsing models. Taking text-to-SQL as a case study, we successfully couple our approach with two off-the-shelf parsers, obtaining an absolute improvement of up to 9.8%.
We present a method for creating 3D indoor scenes with a generative model learned from a collection of semantically segmented depth images captured from different unknown scenes. Given a room of a specified size, our method automatically generates 3D objects in the room from a randomly sampled latent code. Unlike existing methods that represent an indoor scene with the type, location, and other properties of the objects in the room and learn the scene layout from a collection of complete 3D indoor scenes, our method models each indoor scene as a 3D semantic scene volume and learns a volumetric generative adversarial network (GAN) from a collection of 2.5D partial observations of 3D scenes. To this end, we apply a differentiable projection layer to project the generated 3D semantic scene volumes into semantically segmented depth images and design a new multiple-view discriminator for learning the complete 3D scene volume from 2.5D semantically segmented depth images. Compared to existing methods, our method not only reduces the workload of modeling and acquiring 3D scenes for training, but also produces better object shapes and more detailed layouts in the scene. We evaluate our method on different indoor scene datasets and demonstrate its advantages. We also extend our method to generating 3D indoor scenes from semantically segmented depth images inferred from RGB images of real scenes.
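The idea behind a differentiable projection layer can be illustrated with a minimal sketch: instead of taking the hard first occupied voxel along each ray, depth is computed as an expectation over soft "first-hit" probabilities, which keeps the operation smooth for gradient-based training. The function below is a simplified NumPy illustration of that principle; the name `soft_depth_projection` and the single-axis ray setup are assumptions, not the paper's actual layer.

```python
import numpy as np

def soft_depth_projection(occupancy):
    """Soft depth map from a 3D occupancy volume (illustrative sketch).

    occupancy: (D, H, W) array with values in [0, 1]; rays run along axis 0.
    Depth is an expectation over first-hit probabilities, so the map stays
    smooth in the occupancy values (the idea behind differentiable projection).
    """
    D = occupancy.shape[0]
    # Probability that a ray survives (hits nothing) before each depth slice.
    transmittance = np.cumprod(1.0 - occupancy + 1e-8, axis=0)
    transmittance = np.concatenate(
        [np.ones_like(occupancy[:1]), transmittance[:-1]], axis=0)
    hit = transmittance * occupancy                  # first-hit probability
    weight = hit / (hit.sum(axis=0, keepdims=True) + 1e-8)
    depths = np.arange(D, dtype=float).reshape(D, 1, 1)
    return (weight * depths).sum(axis=0)             # expected depth per pixel
```

A fully occupied slice at index 3, for example, yields a depth map close to 3 everywhere, since all first-hit mass concentrates there.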
Simultaneous wireless information and power transfer (SWIPT) has been envisioned as an enabling technology for future sixth-generation (6G) networks, as it provides high-efficiency power transfer and high-rate data transmission concurrently. In this paper, we propose a resonant beam charging and communication (RBCC) system utilizing a telescope internal modulator (TIM) and a semiconductor gain medium. The TIM concentrates the diverged beam onto a small gain module, thereby reducing propagation loss and enhancing transmission efficiency. Since the semiconductor gain medium has a better energy absorption capacity than the traditional solid-state one, the overall energy conversion efficiency can be improved. We establish an analytical model of this RBCC system for SWIPT and evaluate its stability, output energy, and spectral efficiency. Numerical analysis shows that the proposed RBCC system can realize stable SWIPT over 10 meters, with an energy conversion efficiency 14 times higher than that of the traditional system using a solid-state gain medium without TIM, and a spectral efficiency above 15 bit/s/Hz.
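As a sanity check on a spectral-efficiency figure such as 15 bit/s/Hz, the Shannon AWGN bound C = log2(1 + SNR) lets one back out the receiver SNR that such an efficiency implies. The helper below is purely illustrative and is not part of the proposed system.

```python
import math

def required_snr_db(spectral_efficiency):
    """SNR (in dB) needed to reach a given spectral efficiency (bit/s/Hz)
    under the Shannon AWGN capacity bound C = log2(1 + SNR)."""
    snr_linear = 2.0 ** spectral_efficiency - 1.0
    return 10.0 * math.log10(snr_linear)
```

For 15 bit/s/Hz this gives roughly 45 dB of SNR, which indicates how clean the received resonant beam must be for the reported efficiency to be attainable.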
To prevent the spread of coronavirus disease 2019 (COVID-19), preliminary temperature measurement and mask detection are conducted in public areas. However, existing temperature measurement methods face safety and deployment challenges. In this paper, to realize safe and accurate temperature measurement even when a person's face is partially obscured, we propose a cloud-edge-terminal collaborative system with a lightweight infrared temperature measurement model. A binocular camera with an RGB lens and a thermal lens is used to capture image pairs simultaneously. Then, a mobile detection model based on a multi-task cascaded convolutional network (MTCNN) is proposed to realize face alignment and mask detection on the RGB images. For accurate temperature measurement, we transfer the facial landmarks from the RGB images to the thermal images via an affine transformation and select a more accurate temperature measurement area on the forehead. The collected information is uploaded to the cloud in real time for COVID-19 prevention. Experiments show that the detection model is only 6.1M and the average detection time is 257 ms. At a distance of 1 m, the error of indoor temperature measurement is about 3%. Thus, the proposed system can realize real-time temperature measurement in public areas.
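The landmark-transfer step can be sketched directly: given a few calibrated point correspondences between the RGB and thermal views, a 2D affine transform is fit by least squares and then applied to the facial landmarks. This is a minimal NumPy sketch with illustrative function names; the actual system may instead use library calibration utilities.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points (N >= 3).
    Returns a 2x3 matrix A with dst ≈ [x, y, 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A = dst
    return A.T                                   # shape (2, 3)

def transform_points(A, pts):
    """Apply a 2x3 affine matrix to (N, 2) points."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T
```

In use, `fit_affine` would be run once on RGB/thermal calibration correspondences, after which `transform_points` maps every detected RGB landmark into thermal-image coordinates to locate the forehead measurement area.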
Optical wireless communication (OWC) meets the demands of future sixth-generation (6G) mobile networks, as it operates at several hundred terahertz and has the potential to enable data rates on the order of Tbps. However, most beam-steering OWC technologies require high-accuracy positioning and high-speed control. Resonant beam communication (RBCom), a non-positioning OWC technology, has been proposed for high-rate mobile communications. The mobility of RBCom relies on its self-alignment characteristic, whereby no positioning is required. In a previous study, an external-cavity second-harmonic-generation (SHG) RBCom system was proposed to eliminate the echo interference inside the resonator. However, its energy conversion efficiency and complexity are of concern. In this paper, we propose an intra-cavity SHG RBCom system to simplify the system design and improve the energy conversion efficiency. We elaborate on the system structure and establish an analytical model. Numerical results show that the proposed intra-cavity design consumes less energy than the external-cavity one to reach the same level of channel capacity at the receiver.
Learning-based 3D shape segmentation is usually formulated as a semantic labeling problem, assuming that all parts of the training shapes are annotated with a given set of tags. This assumption, however, is impractical for learning fine-grained segmentation. Although most off-the-shelf CAD models are, by construction, composed of fine-grained parts, they usually lack semantic tags, and labeling those fine-grained parts is extremely tedious. We approach the problem with deep clustering, where the key idea is to learn part priors from a shape dataset with fine-grained segmentation but no part labels. Given point-sampled 3D shapes, we model the clustering priors of points with a similarity matrix and achieve part segmentation by minimizing a novel low-rank loss. To handle densely sampled point sets, we adopt a divide-and-conquer strategy: we partition the large point set into a number of blocks, and each block is segmented using a deep-clustering-based part prior network trained in a category-agnostic manner. We then train a graph convolutional network to merge the segments of all blocks into the final segmentation result. Our method is evaluated on a challenging benchmark of fine-grained segmentation, showing state-of-the-art performance.
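The low-rank intuition can be made concrete: points belonging to the same part should have similar embeddings, so the pairwise similarity matrix is approximately block-diagonal and hence low-rank, and the nuclear norm (sum of singular values) is the standard convex surrogate for penalizing rank. The snippet below is a simplified NumPy illustration of that idea, not the paper's actual loss; the function name and embedding setup are assumptions.

```python
import numpy as np

def similarity_rank_stats(features):
    """Sketch of the low-rank idea behind deep-clustering part priors.

    features: (N, d) per-point embeddings (hypothetical). Points of the same
    part should share similar embeddings, making the N x N similarity matrix
    approximately block-diagonal and low-rank; the nuclear norm acts as a
    soft, differentiable rank penalty during training.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    S = np.clip(f @ f.T, 0.0, 1.0)              # cosine similarity in [0, 1]
    nuclear = np.linalg.norm(S, ord='nuc')      # sum of singular values
    rank = np.linalg.matrix_rank(S, tol=1e-6)
    return S, nuclear, rank
```

Two perfectly separated parts, for instance, produce a block-diagonal similarity matrix of rank 2, while noisy or entangled embeddings inflate both the rank and the nuclear norm, which is what minimizing such a loss discourages.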
We present a novel approach to robustly detect and perceive vehicles in different camera views as part of a cooperative vehicle-infrastructure system (CVIS). Our formulation is designed for arbitrary camera views and makes no assumptions about intrinsic or extrinsic parameters. First, to deal with multi-view data scarcity, we propose a part-assisted novel-view-synthesis algorithm for data augmentation: we train a part-based texture inpainting network in a self-supervised manner and then render the textured model into the background image with the target 6-DoF pose. Second, to handle various camera parameters, we present a new method that produces dense mappings between image pixels and 3D points to perform robust 2D/3D vehicle parsing. Third, we build the first CVIS dataset for benchmarking, which annotates more than 1540 images (14,017 instances) from real-world traffic scenarios. We combine these novel algorithms and the dataset to develop a robust approach to 2D/3D vehicle parsing for CVIS. In practice, our approach outperforms state-of-the-art methods on 2D detection, instance segmentation, and 6-DoF pose estimation by 4.5%, 4.3%, and 2.9%, respectively. More details and results are included in the supplement. To facilitate future research, we will release the source code and the dataset on GitHub.
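Rendering a model into a background image with a target 6-DoF pose ultimately rests on pinhole projection of 3D points under a rotation, translation, and intrinsics matrix. A minimal sketch of that projection (with illustrative names, not the paper's code) looks like:

```python
import numpy as np

def project_points(points, R, t, K):
    """Pinhole projection of (N, 3) world points under a 6-DoF pose.

    R: (3, 3) rotation and t: (3,) translation mapping world -> camera;
    K: (3, 3) camera intrinsics. Returns (N, 2) pixel coordinates.
    """
    cam = points @ R.T + t           # transform into the camera frame
    uvw = cam @ K.T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide
```

With a calibrated K, every textured-model vertex can be placed at its correct pixel for the chosen pose, which is the geometric core of pose-conditioned data augmentation.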
This paper proposes a novel scheme for mitigating strong interferences, which is applicable to various wireless scenarios, including full-duplex wireless communications and uncoordinated heterogeneous networks. As strong interferences can saturate the receiver's analog-to-digital converters (ADCs), they need to be mitigated both before and after the ADCs, i.e., via hybrid processing. The key idea of the proposed scheme, namely Hybrid Interference Mitigation using Analog Prewhitening (HIMAP), is to insert an M-input M-output analog phase-shifter network (PSN) between the receive antennas and the ADCs to spatially prewhiten the interferences, which requires no signal information but only an estimate of the covariance matrix. After interference mitigation by the PSN prewhitener, the preamble can be synchronized and the signal channel response can be estimated, so a minimum mean squared error (MMSE) beamformer can be applied in the digital domain to further mitigate the residual interferences. Simulation results verify that the HIMAP scheme can suppress interferences 80 dB stronger than the signal using off-the-shelf phase shifters (PSs) with 6-bit resolution.
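The prewhitening-then-MMSE principle can be sketched numerically. One caveat up front: HIMAP's analog PSN is phase-only, whereas the NumPy sketch below uses a full linear whitener derived from a Cholesky factor of the sample covariance. It therefore only illustrates the underlying idea, namely that after whitening, the covariance becomes the identity and the MMSE beamformer reduces to a matched filter on the whitened channel.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 5000                      # antennas, signal-free training snapshots

# One strong interferer (~40 dB above noise) plus unit-power noise.
a_int = np.exp(1j * np.pi * np.arange(M) * np.sin(0.4))   # steering vector
s_int = 100.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
y = np.outer(a_int, s_int) + noise

# Estimate the interference-plus-noise covariance and prewhiten.
R = y @ y.conj().T / N
W = np.linalg.inv(np.linalg.cholesky(R))    # whitening: W R W^H = I
Rw = W @ R @ W.conj().T                     # identity after whitening

# After whitening, the MMSE beamformer R^{-1} h factors through the whitener
# as W^H (W h), i.e., matched filtering on the whitened channel h_w = W h.
h = np.ones(M, dtype=complex)               # hypothetical signal channel
w_mmse = W.conj().T @ (W @ h)
```

Quantizing `W`'s phases to 6 bits and dropping its amplitude variation would move this sketch closer to an actual PSN implementation, at the cost of residual interference that the digital MMSE stage then cleans up.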
Holistically understanding an object and its 3D movable parts through visual perception models is essential for enabling an autonomous agent to interact with the world. For autonomous driving, the dynamics and states of vehicle parts such as doors, the trunk, and the bonnet can provide meaningful semantic information and interaction states, which are essential to ensuring the safety of the self-driving vehicle. Existing visual perception models mainly focus on coarse parsing, such as object bounding box detection or pose estimation, and rarely tackle these situations. In this paper, we address this important autonomous driving problem by solving three critical issues. First, to deal with data scarcity, we propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images and then reconstructing vehicle-human interaction (VHI) scenarios. Our approach is fully automatic, without any human intervention, and can generate a large number of vehicles in uncommon states (VUS) for training deep neural networks (DNNs). Second, to perform fine-grained vehicle perception, we present a multi-task network for VUS parsing and a multi-stream network for VHI parsing. Third, to quantitatively evaluate the effectiveness of our data augmentation approach, we build the first VUS dataset covering real traffic scenarios (e.g., getting in/out of a vehicle or placing/removing luggage). Experimental results show that our approach outperforms other baseline methods in 2D detection and instance segmentation by a large margin (over 8%). In addition, our network yields large improvements in discovering and understanding these uncommon cases. Moreover, we have released the source code, the dataset, and the trained model on GitHub (https://github.com/zongdai/EditingForDNN).