In monocular-video 3D multi-person pose estimation, inter-person occlusion and close interactions can make human detection erroneous and human-joint grouping unreliable. Existing top-down methods rely on human detection and thus suffer from these problems. Existing bottom-up methods do not use human detection, but they process all persons at once at a single scale, making them sensitive to multi-person scale variations. To address these challenges, we propose integrating the top-down and bottom-up approaches to exploit their complementary strengths. Our top-down network estimates the joints of all persons in an image patch rather than just one, making it robust to erroneous bounding boxes. Our bottom-up network incorporates human-detection-based normalized heatmaps, making it more robust in handling scale variations. The estimated 3D poses from the top-down and bottom-up networks are then fed into our integration network to produce the final 3D poses. Beyond this integration, and unlike existing pose discriminators, which are designed for a single person and consequently cannot assess natural inter-person interactions, we propose a two-person pose discriminator that enforces natural two-person interactions. Lastly, we apply a semi-supervised method to overcome the scarcity of 3D ground-truth data. Our quantitative and qualitative evaluations show the effectiveness of our method compared to state-of-the-art baselines.
Reconfigurable intelligent surface (RIS) is a promising reflective radio technology for improving the coverage and rate of future wireless systems by reconfiguring the wireless propagation environment. Current work mainly focuses on the physical-layer design of RIS; however, enabling multiple devices to communicate with the assistance of an RIS is a crucial and challenging problem. Motivated by this, we explore RIS-assisted communications at the medium access control (MAC) layer and propose an RIS-assisted MAC framework. In particular, RIS-assisted transmissions are implemented through pre-negotiation and a multi-dimension reservation (MDR) scheme. Based on this, we investigate RIS-assisted single-channel multi-user (SCMU) communications, in which the RIS, treated as a whole, can be reserved by one user to support multiple data transmissions, thus achieving highly efficient RIS-assisted connections for that user. Moreover, under frequency-selective channels, by applying the MDR scheme to RIS group division, we further explore RIS-assisted multi-channel multi-user (MCMU) communications to improve the service efficiency of the RIS and decrease the computational complexity. In addition, a Markov chain model is built on the proposed RIS-assisted MAC framework to analyze the system performance of SCMU/MCMU. An optimization problem is then formulated to maximize the overall system capacity of SCMU/MCMU under an energy-efficiency constraint. The performance evaluations demonstrate the feasibility and effectiveness of each proposed design.
Relocalization is a fundamental task in robotics and computer vision. There is considerable work on deep camera relocalization, which directly estimates poses from raw images; however, learning-based methods have not yet been applied to radar sensory data. In this work, we investigate how to exploit deep learning to predict global poses from emerging Frequency-Modulated Continuous Wave (FMCW) radar scans. Specifically, we propose a novel end-to-end neural network with self-attention, termed RadarLoc, which estimates 6-DoF global poses directly. We also propose to improve localization performance by utilizing geometric constraints between radar scans. We validate our approach on the recently released, challenging outdoor dataset Oxford Radar RobotCar. Comprehensive experiments demonstrate that the proposed method outperforms radar-based localization and deep camera relocalization methods by a significant margin.
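The abstract does not specify the exact form of the geometric constraints between radar scans; a common choice is to require that the relative transform implied by two predicted global poses agree with an independently measured relative pose. A minimal sketch of such a consistency loss, using SE(2) for brevity (RadarLoc itself predicts 6-DoF poses), with hypothetical function names:

```python
import numpy as np

def se2_matrix(x, y, theta):
    """Homogeneous SE(2) transform from translation (x, y) and heading theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def relative_pose_loss(pred_a, pred_b, measured_rel):
    """Penalize disagreement between the relative pose implied by two
    predicted global poses and an independently measured relative pose."""
    implied_rel = np.linalg.inv(pred_a) @ pred_b
    residual = np.linalg.inv(measured_rel) @ implied_rel
    # Translation error plus wrapped rotation error of the residual transform.
    trans_err = np.linalg.norm(residual[:2, 2])
    rot_err = abs(np.arctan2(residual[1, 0], residual[0, 0]))
    return trans_err + rot_err
```

When the two global pose predictions are exactly consistent with the measured relative motion, the residual transform is the identity and the loss vanishes.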
Reconfigurable intelligent surface (RIS) has become a promising technology for enhancing the reliability of wireless communications, since it can reflect desired signals through appropriate phase shifts. However, the intended signals that impinge upon an RIS are often mixed with interfering signals, which are usually dynamic and unknown. In particular, the received signal-to-interference-plus-noise ratio (SINR) may be degraded by signals reflected from the RISs that originate from non-intended users. To tackle this issue, we introduce the concept of intelligent spectrum learning (ISL), which uses an appropriately trained convolutional neural network (CNN) at the RIS controller to help the RISs infer the interfering signals directly from the incident signals. Capitalizing on ISL, a distributed control algorithm is proposed to maximize the received SINR by dynamically configuring the active/inactive binary status of the RIS elements. Simulation results validate the performance improvement offered by deep learning and demonstrate the superiority of the proposed ISL-aided approach.
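The binary active/inactive configuration step can be illustrated with a toy search. This is only a sketch under strong assumptions: the per-element signal and interference channel contributions are taken as known (in the paper they are inferred by the trained CNN), the critic is a simple sum-and-square SINR, and all names are hypothetical:

```python
import numpy as np

def sinr(mask, h_sig, h_intf, noise=1e-3):
    """SINR when only the elements selected by the boolean `mask` reflect."""
    s = np.abs(np.sum(h_sig[mask])) ** 2
    i = np.abs(np.sum(h_intf[mask])) ** 2
    return s / (i + noise)

def greedy_onoff(h_sig, h_intf, noise=1e-3):
    """Greedily flip each element's binary status while the SINR improves."""
    n = len(h_sig)
    mask = np.ones(n, dtype=bool)  # start with all elements active
    improved = True
    while improved:
        improved = False
        for k in range(n):
            trial = mask.copy()
            trial[k] = ~trial[k]
            if sinr(trial, h_sig, h_intf, noise) > sinr(mask, h_sig, h_intf, noise):
                mask = trial
                improved = True
    return mask
```

On a toy two-element surface where the second element mainly reflects interference, the greedy pass deactivates that element and keeps the signal-dominated one.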
In this paper, a multilingual end-to-end framework called ATCSpeechNet is proposed to tackle the problem of translating communication speech into human-readable text in air traffic control (ATC) systems. In the proposed framework, we focus on integrating multilingual automatic speech recognition (ASR) into one model, in which an end-to-end paradigm converts the speech waveform into text directly, without any feature engineering or lexicon. To compensate for the shortcomings of handcrafted feature engineering under ATC challenges, a speech representation learning (SRL) network is proposed to capture robust and discriminative speech representations from the raw waveform. A self-supervised training strategy is adopted to optimize the SRL network on unlabeled data and to predict speech features, i.e., wave-to-feature. An end-to-end architecture is improved to complete the ASR task, in which a grapheme-based modeling unit is applied to address the multilingual ASR issue. To cope with the scarcity of transcribed samples in the ATC domain, an unsupervised mask-prediction approach is applied to pre-train the backbone network of the ASR model on unlabeled data through a feature-to-feature process. Finally, by integrating SRL with ASR, an end-to-end multilingual ASR framework is formulated in a supervised manner, which is able to translate the raw waveform into text in one model, i.e., wave-to-text. Experimental results on the ATCSpeech corpus demonstrate that the proposed approach achieves high performance with a very small labeled corpus and less resource consumption, reaching a label error rate of only 4.20% on the 58-hour transcribed corpus. Compared to the baseline model, the proposed approach obtains over 100% relative performance improvement, which can be further enhanced as the size of the transcribed corpus increases.
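The feature-to-feature mask-prediction idea can be sketched in a few lines: hide a random subset of feature frames from the model and measure reconstruction error only at the hidden positions. This is a minimal illustration, not the ATCSpeechNet architecture; the mask ratio and function names are assumptions:

```python
import numpy as np

def mask_frames(features, mask_ratio=0.15, seed=0):
    """Zero out a random subset of frames; return the corrupted input
    and the boolean mask of which frames were hidden."""
    rng = np.random.default_rng(seed)
    mask = rng.random(features.shape[0]) < mask_ratio
    if not mask.any():
        mask[rng.integers(features.shape[0])] = True  # mask at least one frame
    corrupted = features.copy()
    corrupted[mask] = 0.0
    return corrupted, mask

def masked_l1_loss(target, predicted, mask):
    """Reconstruction error measured only where frames were masked."""
    return np.abs(target[mask] - predicted[mask]).mean()
```

The pretraining loop would feed `corrupted` to the backbone and minimize `masked_l1_loss` between its output and the original features, so the model learns to infer hidden frames from context.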
In the domain of air traffic control (ATC) systems, efforts to train a practical automatic speech recognition (ASR) model always face the problem of small training samples, since the collection and annotation of speech samples are expert- and domain-dependent tasks. In this work, a novel training approach based on pretraining and transfer learning is proposed to address this issue, and an improved end-to-end deep learning model is developed to address the specific challenges of ASR in the ATC domain. An unsupervised pretraining strategy is first proposed to learn speech representations from the unlabeled samples of a given dataset. Specifically, a masking strategy is applied to improve the diversity of the samples without losing their general patterns. Subsequently, transfer learning is applied to fine-tune a pretrained model or other optimized baseline models to finally achieve the supervised ASR task. By virtue of the common terminology used in the ATC domain, the transfer learning task can be regarded as a sub-domain adaptation task, in which the transferred model is optimized using a joint corpus consisting of baseline samples and newly transcribed samples from the target dataset. This joint-corpus construction strategy enriches the size and diversity of the training samples, which is important for addressing the issue of a small transcribed corpus. In addition, speed perturbation is applied to augment the newly transcribed samples to further improve the quality of the speech corpus. Three real ATC datasets are used to validate the proposed ASR model and training strategies. The experimental results demonstrate that the ASR performance is significantly improved on all three datasets, with an absolute character error rate only one-third of that achieved through supervised training alone. The applicability of the proposed strategies to other ASR approaches is also validated.
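Speed perturbation is a standard ASR augmentation: each utterance is resampled at a few rate factors (commonly 0.9, 1.0, 1.1), multiplying the corpus size. A minimal sketch using linear interpolation, assuming the paper's exact factors and resampler (function names are hypothetical):

```python
import numpy as np

def speed_perturb(wave, factor):
    """Resample a waveform by `factor` via linear interpolation
    (factor > 1 speeds up and shortens, factor < 1 slows down and lengthens)."""
    n_out = int(round(len(wave) / factor))
    src = np.linspace(0, len(wave) - 1, n_out)
    return np.interp(src, np.arange(len(wave)), wave)

def augment(wave, factors=(0.9, 1.0, 1.1)):
    """Standard three-way speed perturbation used to enlarge a speech corpus."""
    return [speed_perturb(wave, f) for f in factors]
```

In practice a production pipeline would use a proper polyphase resampler (e.g. the one in a speech toolkit) rather than `np.interp`, but the corpus-tripling effect is the same.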
Static analysis tools are widely used for vulnerability detection because they can reason about programs with complex behavior and millions of lines of code. Despite their popularity, static analysis tools are known to generate an excess of false positives. The recent ability of machine learning models to understand programming languages opens new possibilities when applied to static analysis. However, existing datasets for training vulnerability-identification models suffer from multiple limitations, such as limited bug context, limited size, and synthetic, unrealistic source code. We propose D2A, a differential-analysis-based approach to labeling issues reported by static analysis tools. The D2A dataset is built by analyzing version pairs from multiple open-source projects. From each project, we select bug-fixing commits and run static analysis on the versions before and after each commit. If an issue detected in the before-commit version disappears in the corresponding after-commit version, it is very likely a real bug that was fixed by the commit. We use D2A to generate a large labeled dataset for training vulnerability-identification models. We show that the dataset can be used to build a classifier that identifies likely false alarms among the issues reported by static analysis, helping developers prioritize and investigate potential true positives first.
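The differential labeling rule can be sketched directly: issues present before a bug-fixing commit but absent afterwards are labeled as likely real bugs, while persisting issues are labeled as likely false positives. This is a simplification under the assumption that each issue has a stable fingerprint; the actual D2A pipeline matches issues across versions with more careful heuristics (e.g. tolerating line shifts):

```python
def label_issues(before_issues, after_issues):
    """Differential labeling: an issue reported before a bug-fixing commit
    that disappears afterwards is labeled 1 (likely a real, fixed bug);
    an issue that persists is labeled 0 (likely a false positive)."""
    after = set(after_issues)
    return {issue: int(issue not in after) for issue in before_issues}
```

For example, if the analyzer reports a buffer overflow and a null dereference before the fix but only the null dereference afterwards, the overflow is labeled 1 and the null dereference 0.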
With the prevalence of online social media, users' social connections have been widely studied and utilized to enhance the performance of recommender systems. In this paper, we explore the use of hyperbolic geometry for social recommendation. We present Hyperbolic Social Recommender (HSR), a novel social recommendation framework that leverages hyperbolic geometry to boost recommendation performance. With the help of hyperbolic spaces, HSR can learn high-quality user and item representations that better model user-item interactions and user-user social relations. Through a series of extensive experiments, we show that HSR outperforms its Euclidean counterpart and state-of-the-art social recommenders on click-through rate prediction and top-K recommendation, demonstrating the effectiveness of social recommendation in hyperbolic space.
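The abstract does not give HSR's exact formulation, but hyperbolic recommenders typically score pairs with the geodesic distance in the Poincare ball, where distances grow rapidly near the boundary and thus naturally encode hierarchy. A minimal sketch of that standard distance (not HSR's specific model):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball,
    d(u, v) = arccosh(1 + 2||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / max(denom, eps))
```

From the origin the distance reduces to `2 * artanh(||v||)`, so points near the boundary are exponentially far apart, which is what lets hyperbolic embeddings pack tree-like user/item structure into few dimensions.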
Speech emotion recognition (SER) is a key technology to enable more natural human-machine communication. However, SER has long suffered from a lack of public large-scale labeled datasets. To circumvent this problem, we investigate how unsupervised representation learning on unlabeled datasets can benefit SER. We show that the contrastive predictive coding (CPC) method can learn salient representations from unlabeled datasets, which improves emotion recognition performance. In our experiments, this method achieved state-of-the-art concordance correlation coefficient (CCC) performance for all emotion primitives (activation, valence, and dominance) on IEMOCAP. Additionally, on the MSP-Podcast dataset, our method obtained considerable performance improvements compared to baselines.
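At the heart of CPC is the InfoNCE objective: a context representation must score the true future representation higher than negatives drawn from elsewhere in the data. A minimal sketch of the loss shape with a plain dot-product critic (CPC proper uses a learned bilinear critic and an autoregressive context network; the names here are illustrative):

```python
import numpy as np

def infonce_loss(context, future, negatives):
    """InfoNCE: cross-entropy of picking the true future representation
    out of one positive and K negatives, scored by a dot-product critic."""
    pos = context @ future            # score of the true (context, future) pair
    neg = negatives @ context         # scores of the K distractor samples
    logits = np.concatenate([[pos], neg])
    logits -= logits.max()            # shift for numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

An uninformative context yields the chance-level loss `log(K + 1)`, while a context aligned with its true future drives the loss toward zero, which is what makes minimizing it produce salient representations.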