For multi-unmanned-aerial-vehicle (UAV)-assisted mobile edge computing (MEC) networks, we study the problem of joint computation and communication for user equipment carrying multiple task types. Specifically, we consider an MEC network subject to both communication and computation uncertainties, where only partial channel state information and inaccurate estimates of task complexity are available. We introduce a robust design that accounts for these uncertainties and minimizes the total weighted energy consumption by jointly optimizing the UAV trajectories, task partitioning, and the computation and communication resource allocation in the multi-UAV scenario. The formulated problem is challenging to solve because of the coupled optimization variables and the high uncertainty. To overcome this, we reformulate the problem as a multi-agent Markov decision process and propose a multi-agent proximal policy optimization framework with a Beta distribution policy to achieve flexible learning. Numerical results demonstrate the effectiveness and robustness of the proposed algorithm for the multi-UAV-assisted MEC network, which outperforms representative deep reinforcement learning and heuristic benchmarks.
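The abstract does not give implementation details, but the appeal of a Beta-distribution policy is easy to illustrate: its support is (0, 1), so scaled samples always respect physical action bounds (here a hypothetical UAV speed range), unlike a clipped Gaussian policy. A minimal sketch, with illustrative shape parameters:

```python
import random

def sample_beta_action(alpha, beta, low, high):
    """Sample a bounded action from Beta(alpha, beta), scaled to [low, high].

    Because the Beta distribution lives on (0, 1), the scaled sample never
    leaves the feasible action range, avoiding the clipping bias that an
    unbounded Gaussian policy would introduce.
    """
    u = random.betavariate(alpha, beta)  # always in (0, 1)
    return low + (high - low) * u

# Hypothetical example: a UAV speed action constrained to [0, 20] m/s.
random.seed(0)
actions = [sample_beta_action(2.0, 5.0, 0.0, 20.0) for _ in range(1000)]
```

In an actual PPO agent the network would output `alpha` and `beta` per action dimension; the point here is only the bounded support.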
Existing neural topic models (NTMs) with contrastive learning suffer from a sample bias problem owing to their word-frequency-based sampling strategy, which may yield false negative samples whose semantics are similar to the prototypes. In this paper, we explore efficient sampling strategies and contrastive learning in NTMs to address this issue. We propose a new sampling assumption: negative samples should contain words that are semantically irrelevant to the prototype. Based on it, we propose the graph contrastive topic model (GCTM), which conducts graph contrastive learning (GCL) using informative positive and negative samples generated by a graph-based sampling strategy that leverages in-depth correlation and irrelevance among documents and words. In GCTM, we first model the input document as a document-word bipartite graph (DWBG) and construct positive and negative word co-occurrence graphs (WCGs), encoded by graph neural networks, to express in-depth semantic correlation and irrelevance among words. Based on the DWBG and WCGs, we design the document-word information propagation (DWIP) process to perturb the edges of the DWBG according to multi-hop correlations/irrelevance among documents and words. This yields the desired negative and positive samples, which are used for GCL together with the prototypes to improve the learning of document topic representations and latent topics. We further show that GCL can be interpreted as a structured variational graph auto-encoder that maximizes the mutual information between latent topic representations of different perspectives on the DWBG. Experiments on several benchmark datasets demonstrate the effectiveness of our method for topic coherence and document representation learning compared with existing state-of-the-art methods.
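A minimal sketch of the contrastive objective that such prototype-based models build on (a generic InfoNCE-style loss; the exact GCTM loss, samples, and temperature are not specified here, so the values below are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(prototype, positive, negatives, tau=0.5):
    """InfoNCE-style loss: pull the positive sample toward the prototype
    and push the negative samples away. A false negative (a negative that
    is semantically close to the prototype) inflates this loss, which is
    exactly the sample bias problem the abstract describes."""
    pos = math.exp(cosine(prototype, positive) / tau)
    neg = sum(math.exp(cosine(prototype, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

With a well-chosen positive and truly irrelevant negatives the loss is small; swapping them makes it large, which is what drives the representation learning.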
In this paper, we investigate the coordination of sensing and computation offloading in a reconfigurable intelligent surface (RIS)-aided, base station (BS)-centric symbiotic radio (SR) system. Specifically, Internet-of-Things (IoT) devices first sense data from the environment and then either process the data locally or offload it to the BS for remote computing, while RISs are leveraged to enhance the quality of blocked channels and also act as IoT devices that transmit their own sensed data. To explore the mechanism of cooperative sensing and computation offloading in this system, we maximize the total number of completed sensed bits of all users and RISs by jointly optimizing the time allocation parameter, the passive beamforming at each RIS, the transmit beamforming at the BS, and the energy partition parameters of all users, subject to constraints on the size of the sensed data, the energy supply, and the given time cycle. The formulated nonconvex problem is tightly coupled through the time allocation parameter and involves mathematical expectations, so it cannot be solved directly. We use Monte Carlo and fractional programming methods to transform the nonconvex objective function and then propose an alternating-optimization-based algorithm to find an approximate solution with guaranteed convergence. Numerical results show that the RIS-aided SR system outperforms other benchmarks in sensing. Furthermore, with the aid of the RIS, the channel quality and system performance can be significantly improved.
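The expectations in such objectives are typically replaced by sample averages before the fractional-programming step. A generic Monte Carlo sketch for an expected rate over Rayleigh fading (illustrative only; the paper's actual expectations concern its own channel and sensing model):

```python
import math
import random

def mc_expected_rate(snr, n_samples=100_000, seed=0):
    """Monte Carlo estimate of E[log2(1 + snr * |h|^2)] for Rayleigh
    fading h ~ CN(0, 1). The expectation inside a nonconvex objective is
    replaced by this sample average, after which deterministic methods
    (e.g., fractional programming) can be applied."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Real and imaginary parts of h, each N(0, 1/2), so E[|h|^2] = 1.
        re = rng.gauss(0.0, math.sqrt(0.5))
        im = rng.gauss(0.0, math.sqrt(0.5))
        total += math.log2(1.0 + snr * (re * re + im * im))
    return total / n_samples
```

At snr = 1 the true value of this expectation is about 0.86 bit/s/Hz, and the sample average converges to it at the usual O(1/sqrt(n)) Monte Carlo rate.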
Label-noise learning (LNL) aims to improve a model's generalization given training data with noisy labels. To facilitate practical LNL algorithms, researchers have proposed various label noise types, ranging from class-conditional to instance-dependent noise. In this paper, we introduce a novel label noise type called BadLabel, which degrades the performance of existing LNL algorithms by a large margin. BadLabel is crafted based on the label-flipping attack against standard classification: specific samples are selected and their labels are flipped to other labels so that the loss values of clean and noisy labels become indistinguishable. To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels distinguishable again. Once we select a small set of (mostly) clean labeled data, we can apply semi-supervised learning techniques to train the model accurately. Empirically, our experimental results demonstrate that existing LNL algorithms are vulnerable to the newly introduced BadLabel noise type, while our proposed robust LNL method can effectively improve the model's generalization performance under various types of label noise. The new dataset of noisy labels and the source code of the robust LNL algorithms are available at https://github.com/zjfheart/BadLabels.
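The premise that clean and noisy labels are separated by their loss values can be illustrated with the standard small-loss selection heuristic that many LNL pipelines rely on (a generic sketch of that heuristic, not the paper's adversarial label perturbation step):

```python
def select_clean(losses, keep_ratio=0.5):
    """Small-loss selection: treat the lowest-loss fraction of samples as
    (mostly) clean. BadLabel is designed to break exactly this kind of
    separation by making clean and noisy losses indistinguishable, which
    is why an adversarial re-separation step is needed."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    k = int(len(losses) * keep_ratio)
    return set(order[:k])

# Toy per-sample losses: indices 0 and 2 look clean (low loss).
clean_idx = select_clean([0.1, 2.0, 0.2, 3.0], keep_ratio=0.5)
```

The selected subset would then seed a semi-supervised training stage, with the remaining samples treated as unlabeled.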
The generalized linear system (GLS) has been widely used in wireless communications to evaluate the effect of nonlinear preprocessing on receiver performance. Generalized approximate message passing (GAMP) is a state-of-the-art algorithm for signal recovery in a GLS, but it is limited to measurement matrices with independent and identically distributed (IID) elements. To relax this restriction, generalized orthogonal/vector AMP (GOAMP/GVAMP) was established for unitarily invariant measurement matrices and has been proven replica Bayes-optimal for the uncoded GLS. However, the information-theoretic limit of GOAMP/GVAMP remains an open challenge for arbitrary input distributions owing to its complex state evolution (SE). To address this issue, in this paper we provide an achievable-rate analysis of GOAMP/GVAMP for the GLS, establishing its information-theoretic limit (i.e., maximum achievable rate). Specifically, we transform the fully-unfolded SE of GOAMP/GVAMP into an equivalent single-input single-output variational SE (VSE). Using the VSE and the mutual information and minimum mean-square error (I-MMSE) lemma, we derive the achievable rate of GOAMP/GVAMP. Moreover, we propose the optimal coding principle for maximizing the achievable rate, based on which a class of low-density parity-check (LDPC) codes is designed. Numerical results verify the achievable-rate advantages of GOAMP/GVAMP over the conventional maximum ratio combining (MRC) receiver based on the linearized model, as well as the BER performance gains (0.8–2.8 dB) of the optimized LDPC codes compared to existing methods.
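For reference, the I-MMSE lemma invoked in the rate derivation relates the mutual information of a Gaussian channel to the MMSE of estimating its input (real-valued form shown; for a complex channel the factor 1/2 is dropped), so integrating the MMSE predicted by the SE over SNR yields the achievable rate:

```latex
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\, I(\mathrm{snr})
  = \tfrac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
\qquad
R = I(\mathrm{snr})
  = \tfrac{1}{2}\int_{0}^{\mathrm{snr}} \mathrm{mmse}(\gamma)\,\mathrm{d}\gamma .
```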
Dialogue systems for non-English languages have long been under-explored. In this paper, we take a first step toward investigating few-shot cross-lingual transfer learning (FS-XLT) and multitask learning (MTL) in the context of open-domain dialogue generation for non-English languages with limited data. In preliminary experiments, we observed catastrophic forgetting in both FS-XLT and MTL for all six languages. To mitigate this issue, we propose a simple yet effective prompt learning approach that preserves the multilinguality of the multilingual pre-trained language model (mPLM) in FS-XLT and MTL by bridging the gap between pre-training and fine-tuning with fixed-prompt LM tuning and hand-crafted prompts. Experimental results on all six languages, in terms of both automatic and human evaluations, demonstrate the effectiveness of our approach. Our code is available at https://github.com/JeremyLeiLiu/XLinguDial.
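Fixed-prompt LM tuning amounts to wrapping every input in a fixed natural-language template before fine-tuning, so the fine-tuning objective looks like the pre-training one. A toy sketch (the template below is hypothetical, not the paper's hand-crafted prompt):

```python
def build_prompt(context, language):
    """Wrap a dialogue context in a fixed, hand-crafted template.

    The template stays frozen across training and inference; only the
    model parameters are tuned. This keeps fine-tuning inputs close in
    form to the mPLM's pre-training text. The wording here is a
    hypothetical stand-in for the paper's actual prompts.
    """
    return f"Respond in {language} to the dialogue: {context} Response:"

prompted = build_prompt("How was your weekend?", "German")
```

The model is then trained to continue `prompted` with the target response.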
Approximate message passing (AMP) algorithms break a (high-dimensional) statistical problem into parts and then repeatedly solve each part in turn, akin to alternating projections. A distinguishing feature is that their asymptotic behaviour can be accurately predicted via the associated state evolution equations. Orthogonal AMP (OAMP) was recently developed to avoid computing the so-called Onsager term in traditional AMP algorithms, providing two clear benefits: the derivation of an OAMP algorithm is both straightforward and more broadly applicable. OAMP was originally demonstrated for statistical problems with a single measurement vector and a single transform. This paper extends OAMP to statistical problems with multiple measurement vectors (MMVs) and multiple transforms (MTs). We name the resulting algorithms OAMP-MMV and OAMP-MT, respectively, and their combination augmented OAMP (A-OAMP). Whereas extending traditional AMP algorithms to such problems would be challenging, the orthogonality principle underpinning OAMP makes these extensions straightforward. The MMV and MT models are widely applicable in signal processing and communications. We present an example of a MIMO relay system with correlated source data and signal clipping, which can be modelled as a joint MMV-MT system. While existing methods struggle with this example, OAMP offers an efficient solution with excellent performance.
Efficient signal detectors that achieve satisfactory performance are important yet challenging to design for large-scale communication systems. This paper considers a non-orthogonal sparse code multiple access (SCMA) configuration for multiple-input multiple-output (MIMO) systems with the recently proposed orthogonal time frequency space (OTFS) modulation. We develop a novel low-complexity yet effective customized memory approximate message passing (memory AMP) algorithm for channel equalization and multi-user detection. Specifically, the proposed memory AMP detector exploits the sparsity of the channel matrix and requires only matrix-vector multiplications in each iteration, keeping the complexity low. To alleviate the performance degradation caused by the positive reinforcement problem in the iterative process, all preceding messages are utilized to guarantee the orthogonality principle in the memory AMP detector. Simulation results illustrate the superiority of our memory AMP detector over existing solutions.
Intelligent medical diagnosis has shown remarkable progress based on large-scale datasets with precise annotations. However, labeled images are scarce because expert annotation is expensive. To fully exploit the easily available unlabeled data, we propose a novel Spatio-Temporal Structure Consistent (STSC) learning framework. Specifically, a Gram matrix is derived to combine spatial structure consistency and temporal structure consistency. This Gram matrix captures the structural similarity among the representations of different training samples. At the spatial level, our framework explicitly enforces the consistency of structural similarity among different samples under perturbations. At the temporal level, we enforce the consistency of structural similarity across training iterations by mining the stable sub-structures of a relation graph. Experiments on two medical image datasets (i.e., the ISIC 2018 challenge and ChestX-ray14) show that our method outperforms state-of-the-art semi-supervised learning (SSL) methods. Furthermore, extensive qualitative analysis of the Gram matrices and Grad-CAM heatmaps validates the effectiveness of our method.
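The spatial structure consistency idea can be sketched directly: compare the Gram matrices of pairwise cosine similarities computed from two perturbed views of the same batch (a minimal illustration; the paper's framework additionally handles the temporal level and stable sub-structure mining):

```python
import math

def gram(reps):
    """Gram matrix of cosine similarities among sample representations.

    Entry (i, j) measures how similar samples i and j look to the model,
    so the matrix encodes the batch's relational structure rather than
    any individual prediction.
    """
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    normed = [unit(r) for r in reps]
    return [[sum(a * b for a, b in zip(u, v)) for v in normed] for u in normed]

def structure_consistency(reps_a, reps_b):
    """Mean squared difference between the Gram matrices of two perturbed
    views of the same batch: small when pairwise structure is preserved."""
    ga, gb = gram(reps_a), gram(reps_b)
    n = len(ga)
    return sum((ga[i][j] - gb[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)
```

Because cosine similarity ignores magnitude, views that merely rescale the representations incur zero penalty, while views that change the relational structure do not.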
This study aims to improve the automatic scoring of student responses in science education. BERT-based language models have shown significant superiority over traditional NLP models in various language-related tasks. However, students' science writing, including argumentation and explanation, is domain-specific. In addition, the language used by students differs from the language of journals and Wikipedia, the training sources of BERT and its existing variants. These observations suggest that a domain-specific model pre-trained on science education data may improve performance. However, the ideal type of data for contextualizing a pre-trained language model and improving its performance in automatically scoring students' written responses remains unclear. Therefore, we employ different data in this study to contextualize both BERT and SciBERT models and compare their performance on the automatic scoring of scientific argumentation assessment tasks. We use three datasets to pre-train the models: 1) journal articles in science education, 2) a large dataset of students' written responses (over 50,000 samples), and 3) a small dataset of students' written responses to scientific argumentation tasks. Our experimental results show that in-domain training corpora constructed from science questions and responses improve language model performance on a wide variety of downstream tasks. Our study confirms the effectiveness of continual pre-training on domain-specific data in the education domain and demonstrates a generalizable strategy for automating science education tasks with high accuracy. We plan to release our data and SciEdBERT models for public use and community engagement.