We introduce a framework for learning from multiple generated graph views, named graph symbiosis learning (GraphSym). In GraphSym, graph neural networks (GNNs) developed on multiple generated graph views can adaptively exchange parameters with each other and fuse information stored in linkage structures and node features. Specifically, we propose a novel adaptive exchange method that iteratively substitutes redundant channels in the weight matrix of one GNN with informative channels of another GNN in a layer-by-layer manner. GraphSym does not rely on specific methods to generate multiple graph views or on specific GNN architectures; thus, existing GNNs can be seamlessly integrated into our framework. On 3 semi-supervised node classification datasets, GraphSym outperforms previous single-graph and multiple-graph GNNs without knowledge distillation and achieves new state-of-the-art results. We also conduct a series of experiments on 15 public benchmarks, 8 popular GNN models, and 3 graph tasks -- node classification, graph classification, and edge prediction -- and show that GraphSym consistently outperforms existing popular GNNs and their ensembles by 1.9\%$\sim$3.9\% on average. Extensive ablation studies and experiments in the few-shot setting further demonstrate the effectiveness of GraphSym.
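The adaptive exchange can be pictured as a per-layer channel swap between the weight matrices of two GNNs. Below is a minimal sketch in PyTorch; the L1-norm importance score and the fixed exchange budget `k` are illustrative assumptions, not GraphSym's actual redundancy criterion.

```python
import torch

def exchange_channels(w_a: torch.Tensor, w_b: torch.Tensor, k: int) -> torch.Tensor:
    """Swap the k least-informative output channels of w_a (shape [in, out])
    for the k most-informative channels of w_b.

    The L1-norm channel score below is an illustrative stand-in for
    whatever redundancy measure GraphSym actually uses.
    """
    score_a = w_a.abs().sum(dim=0)                 # per-channel importance in w_a
    score_b = w_b.abs().sum(dim=0)                 # per-channel importance in w_b
    redundant = torch.topk(score_a, k, largest=False).indices
    informative = torch.topk(score_b, k, largest=True).indices
    w_new = w_a.clone()
    w_new[:, redundant] = w_b[:, informative]      # substitute channels
    return w_new

# Applied layer by layer (and in both directions), this lets the GNNs
# trained on different graph views share parameters without a
# distillation loss.
```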
Pairwise learning focuses on learning tasks with pairwise loss functions, which depend on pairs of training instances and naturally fit the modeling of relationships between pairs of samples. In this paper, we focus on the privacy of pairwise learning and propose a new differential privacy paradigm for pairwise learning based on gradient perturbation. Beyond the privacy guarantees, we also analyze the excess population risk and give corresponding bounds under both expectation and high-probability conditions. We use the \textit{on-average stability} and the \textit{pairwise locally elastic stability} theories to analyze the expectation bound and the high-probability bound, respectively. Moreover, our utility bounds do not require convex pairwise loss functions, which means that our method is general to both convex and non-convex conditions. Under these circumstances, the utility bounds are similar to (or better than) previous bounds derived under convexity or strong convexity assumptions, which makes these results attractive.
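A minimal sketch of one gradient-perturbation step on a pairwise loss is given below, assuming per-pair gradient clipping followed by Gaussian noise; the clip norm, noise scale, and pair-sampling scheme are placeholders rather than the paper's exact mechanism.

```python
import numpy as np

def dp_pairwise_sgd_step(w, pairs, grad_fn, lr=0.1, clip=1.0, sigma=1.0,
                         rng=np.random.default_rng()):
    """One gradient-perturbation step for a pairwise loss.

    pairs   : list of (z_i, z_j) training-instance pairs
    grad_fn : gradient of the pairwise loss at w for a single pair
    Clipping bounds each pair's contribution; Gaussian noise then
    privatizes the averaged gradient. Calibrating sigma to the actual
    per-instance sensitivity is the pairwise-specific part of the paper.
    """
    g = np.zeros_like(w)
    for zi, zj in pairs:
        gi = grad_fn(w, zi, zj)
        gi *= min(1.0, clip / (np.linalg.norm(gi) + 1e-12))  # clip per pair
        g += gi
    g /= len(pairs)
    g += rng.normal(0.0, sigma * clip / len(pairs), size=w.shape)  # perturb
    return w - lr * g
```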
Inspired by biological evolution, we explain the rationality of the Vision Transformer by analogy with the proven, practical Evolutionary Algorithm (EA) and show that the two admit a consistent mathematical representation. Analogous to the dynamic local population in EA, we improve the existing transformer structure, propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly. Moreover, we introduce a space-filling curve into the current vision transformer to serialize image data into a uniform sequential format. Thus we can design a unified EAT framework to address multi-modal tasks, separating the network architecture from the data-format adaptation. Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works, while having fewer parameters and greater throughput. We further conduct multi-modal tasks, e.g., Text-Based Image Retrieval, to demonstrate the superiority of the unified EAT, and our approach improves rank-1 accuracy by +3.7 points over the baseline on the CSS dataset.
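For the serialization step, any space-filling curve that keeps spatially adjacent pixels adjacent in the 1-D sequence will do. The sketch below uses the classic Hilbert-curve index-to-coordinate mapping (for n a power of two) as one concrete choice, which may differ from EAT's exact curve.

```python
import numpy as np

def hilbert_d2xy(n: int, d: int):
    """Map index d along an n x n Hilbert curve (n a power of two) to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def serialize_image(img: np.ndarray) -> np.ndarray:
    """Flatten an n x n (x channels) image along the Hilbert curve so that
    spatially nearby pixels stay nearby in the token sequence."""
    n = img.shape[0]
    order = [hilbert_d2xy(n, d) for d in range(n * n)]
    return np.stack([img[y, x] for x, y in order])
```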
While pre-trained language models (e.g., BERT) have achieved impressive results on different natural language processing tasks, they have large numbers of parameters and incur high computational and memory costs, which make them difficult to deploy in the real world. Therefore, model compression is necessary to reduce the computation and memory cost of pre-trained models. In this work, we aim to compress BERT and address the following two challenging practical issues: (1) the compression algorithm should be able to output multiple compressed models with different sizes and latencies, in order to support devices with different memory and latency limitations; (2) the algorithm should be downstream-task agnostic, so that the compressed models are generally applicable to different downstream tasks. We leverage techniques in neural architecture search (NAS) and propose NAS-BERT, an efficient method for BERT compression. NAS-BERT trains a big supernet on a search space containing a variety of architectures and outputs multiple compressed models with adaptive sizes and latencies. Furthermore, the training of NAS-BERT is conducted on standard self-supervised pre-training tasks (e.g., masked language modeling) and does not depend on specific downstream tasks. Thus, the compressed models can be used across various downstream tasks. The technical challenge of NAS-BERT is that training a big supernet on the pre-training task is extremely costly. We employ several techniques, including block-wise search, search space pruning, and performance approximation, to improve search efficiency and accuracy. Extensive experiments on the GLUE and SQuAD benchmark datasets demonstrate that NAS-BERT can find lightweight models with better accuracy than previous approaches, and can be directly applied to different downstream tasks with adaptive model sizes for different memory or latency requirements.
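Supernet training with weight sharing can be sketched as sampling one candidate op per layer per training step (single-path one-shot style). The candidate set below is illustrative, not NAS-BERT's actual search space, and the block-wise search and pruning techniques are omitted.

```python
import random
import torch.nn as nn

class SuperLayer(nn.Module):
    """One supernet layer holding several candidate ops of different sizes.

    Each forward pass activates a single randomly sampled path, so all
    candidates share training signal; sub-models are later extracted by
    fixing one op per layer. Hidden sizes and the op set are illustrative.
    """
    def __init__(self, dim=512):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.MultiheadAttention(dim, num_heads=8, batch_first=True),
            nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(),
                          nn.Linear(2 * dim, dim)),
            nn.Identity(),               # permits shallower sub-models
        ])

    def forward(self, x):
        op = random.choice(self.candidates)   # sample one op per step
        if isinstance(op, nn.MultiheadAttention):
            out, _ = op(x, x, x)
            return out
        return op(x)
```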
Pairwise learning focuses on learning tasks with pairwise loss functions, which depend on pairs of training instances and naturally fit the modeling of relationships between pairs of samples. In this paper, we focus on the privacy of pairwise learning and propose a new differential privacy paradigm for pairwise learning based on gradient perturbation. We analyze the privacy guarantees from two points of view: the $\ell_2$-sensitivity and the moments accountant method. We further analyze the generalization error, the excess empirical risk, and the excess population risk of our proposed method and give corresponding bounds. By introducing algorithmic stability theory to pairwise differential privacy, our theoretical analysis does not require convex pairwise loss functions, which means that our method is general to both convex and non-convex conditions. Under these circumstances, the utility bounds are better than previous bounds derived under convexity or strong convexity assumptions, which is an attractive result.
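For intuition on how the $\ell_2$-sensitivity enters, the classic Gaussian-mechanism calibration is sketched below; the moments accountant used in the paper gives tighter guarantees under composition over many iterations than applying this single-shot bound per step.

```python
import numpy as np

def gaussian_sigma(sensitivity: float, eps: float, delta: float) -> float:
    """Classic Gaussian-mechanism calibration: the noise scale that gives
    (eps, delta)-DP for a query with the given l2-sensitivity
    (valid for eps <= 1)."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

# With per-pair clipped gradients, each pair's contribution is bounded by
# the clip norm; converting that to a per-instance sensitivity (one
# instance can appear in many pairs) is where pairwise learning departs
# from pointwise DP-SGD.
```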
We consider the problem of range-Doppler imaging using one-bit automotive LFMCW or PMCW radar that utilizes one-bit ADC sampling with time-varying thresholds at the receiver. The one-bit sampling technique can significantly reduce the cost as well as the power consumption of automotive radar systems. We formulate the one-bit LFMCW/PMCW radar range-Doppler imaging problem as one-bit sparse parameter estimation. The recently proposed hyperparameter-free (and hence user-friendly) weighted SPICE algorithms, including SPICE, LIKES, SLIM, and IAA, achieve excellent parameter estimation performance for data sampled with high precision. However, these algorithms cannot be used directly for one-bit data. In this paper, we first present a regularized minimization algorithm, referred to as 1bSLIM, for accurate range-Doppler imaging using one-bit radar systems. Then, we describe how to extend the SPICE, LIKES, and IAA algorithms to the one-bit data case, and refer to these extensions as 1bSPICE, 1bLIKES, and 1bIAA. These one-bit hyperparameter-free algorithms are unified within the one-bit weighted SPICE framework. Moreover, efficient implementations of the aforementioned algorithms are investigated that rely heavily on the use of FFTs. Finally, both simulated and experimental examples are provided to demonstrate the effectiveness of the proposed algorithms for range-Doppler imaging using one-bit automotive radar systems.
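The front end common to all four algorithms is the one-bit ADC with known time-varying thresholds; a minimal model of that sampler (the variable names are ours) is:

```python
import numpy as np

def one_bit_adc(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """One-bit sampling with time-varying thresholds: only the sign of the
    (noisy) signal relative to the known threshold sequence h is retained.
    Varying h over time preserves amplitude information that a fixed zero
    threshold would destroy."""
    return np.where(x >= h, 1, -1).astype(np.int8)

# e.g. z = one_bit_adc(received_signal, rng.normal(0.0, 1.0, num_samples))
```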
We consider the problem of sinusoidal parameter estimation using signed observations obtained via one-bit sampling with fixed as well as time-varying thresholds. In a previous paper, a relaxation-based algorithm, referred to as 1bRELAX, was proposed to iteratively maximize the likelihood function. However, the exhaustive search procedure used in each iteration of 1bRELAX is time-consuming. In this paper, we present a majorization-minimization (MM) based 1bRELAX algorithm, referred to as 1bMMRELAX, to enhance the computational efficiency of 1bRELAX. Using the MM technique, 1bMMRELAX maximizes the likelihood function iteratively using simple FFT operations instead of the more computationally intensive search used by 1bRELAX. Both simulated and experimental results are presented to show that 1bMMRELAX can significantly reduce the computational cost of 1bRELAX while maintaining its excellent estimation accuracy.
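The MM idea can be sketched generically: each iteration maximizes a surrogate function that touches the likelihood at the current estimate and lower-bounds it elsewhere, so the likelihood never decreases; in 1bMMRELAX each surrogate maximization reduces to FFT-sized operations. A schematic loop (the problem-specific surrogate is omitted):

```python
def mm_ascend(theta0, argmax_surrogate, n_iter=100):
    """Schematic minorize-maximize loop. argmax_surrogate(theta) must
    return the maximizer of a surrogate that equals the likelihood at
    theta and minorizes it elsewhere, guaranteeing monotone ascent."""
    theta = theta0
    for _ in range(n_iter):
        theta = argmax_surrogate(theta)  # an FFT-based step in 1bMMRELAX
    return theta
```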
Radio frequency interference (RFI) mitigation and radar echo recovery are critically important for the proper functioning of ultra-wideband (UWB) radar systems using one-bit sampling techniques. We recently introduced a technique for one-bit UWB radar, which first uses a majorization-minimization method for RFI parameter estimation followed by a sparse method for radar echo recovery. However, this technique suffers from high computational complexity due to the need to estimate the parameters of each RFI source separately and iteratively. In this paper, we present a computationally efficient joint RFI mitigation and radar echo recovery framework to greatly reduce the computational cost. Specifically, we exploit the sparsity of RFI in the fast-frequency domain and the sparsity of radar echoes in the fast-time domain to design a one-bit weighted SPICE (SParse Iterative Covariance-based Estimation) based framework for the joint RFI mitigation and radar echo recovery of one-bit UWB radar. Both simulated and experimental results are presented to show that the proposed one-bit weighted SPICE framework can not only reduce the computational cost but also outperform the existing approach for decoupled RFI mitigation and radar echo recovery of one-bit UWB radar.
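A rough picture of the joint model: the observation is explained by a dictionary stacking an inverse-DFT block (RFI, sparse in fast-frequency) beside an identity block (echoes, sparse in fast-time), with a weighted-SPICE-style reweighting run on the stacked dictionary. The sketch below shows such a SLIM/IAA-style update on high-precision data; the paper's one-bit versions work from signed measurements instead.

```python
import numpy as np

def slim_joint(y: np.ndarray, n_iter=30, eta=1e-3):
    """SLIM/IAA-style iterative reweighting on the joint dictionary
    [F^H, I]: the first block models RFI (sparse in fast-frequency),
    the second models radar echoes (sparse in fast-time).
    Illustrative only; the one-bit extension is the paper's contribution."""
    n = len(y)
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)       # unitary DFT matrix
    A = np.hstack([F.conj().T, np.eye(n)])       # joint sparse dictionary
    s = A.conj().T @ y                            # initial estimate
    for _ in range(n_iter):
        P = np.abs(s) ** 2 + 1e-12                # current powers
        R = (A * P) @ A.conj().T + eta * np.eye(n)
        s = P * (A.conj().T @ np.linalg.solve(R, y))
    return s[:n], s[n:]                           # RFI spectrum, echo profile
```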
The outbreak of novel coronavirus pneumonia (COVID-19) has caused mortality and morbidity worldwide. Oropharyngeal-swab (OP-swab) sampling is widely used for the diagnosis of COVID-19 around the world. To protect clinical staff from exposure to the virus, we developed a 9-degree-of-freedom (DOF) rigid-flexible coupling (RFC) robot to assist with COVID-19 OP-swab sampling. The robot is composed of a visual system, a UR5 robot arm, a micro-pneumatic actuator, and a force-sensing system. It is expected to reduce risk and relieve clinical staff of long-term, repetitive sampling work. Compared with a rigid sampling robot, the developed force-sensing RFC robot can perform OP-swab sampling procedures in a safer and gentler way. In addition, a varying-parameter zeroing neural network (VP-ZNN) based optimization method is proposed for motion planning of the 9-DOF redundant manipulator. The developed robot system is validated through OP-swab sampling on both oral-cavity phantoms and human volunteers.
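The VP-ZNN motion planner can be sketched at the kinematic level: define the task-space tracking error, impose error dynamics with a time-growing gain, and resolve redundancy through the Jacobian pseudoinverse. The exponential gain schedule and pure pseudoinverse below are our illustrative choices; the paper's 9-DOF formulation also handles constraints such as joint limits.

```python
import numpy as np

def vp_znn_step(q, t, dt, fk, jac, xd, xd_dot, gamma0=1.0):
    """One integration step of a varying-parameter zeroing neural network
    (VP-ZNN) tracking controller for a redundant manipulator.

    q      : joint angles;  fk(q), jac(q): forward kinematics and Jacobian
    xd(t), xd_dot(t): desired task-space pose and velocity
    """
    gamma = gamma0 * np.exp(t)              # varying design parameter
    e = fk(q) - xd(t)                       # task-space tracking error
    # Zeroing dynamics: drive e_dot = -gamma * e via the pseudoinverse.
    qdot = np.linalg.pinv(jac(q)) @ (xd_dot(t) - gamma * e)
    return q + dt * qdot
```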
Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks are unsupervised and do not take the characteristics of RL problems into consideration. By leveraging returns, the most important feedback signals in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. In the low-data regime, our algorithm outperforms strong baselines on complex tasks in Atari games and the DeepMind Control suite, and achieves even better performance when combined with existing auxiliary tasks.
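One way to instantiate such a return-discriminating auxiliary loss is an InfoNCE-style objective in which state-action pairs with close returns are treated as positives; the similarity threshold and contrastive form below are our illustrative choices, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def return_discriminative_loss(phi, returns, temp=0.1, thresh=0.05):
    """Auxiliary loss pulling together embeddings of state-action pairs
    with similar returns and pushing apart those with different returns.
    phi: [B, D] state-action embeddings; returns: [B] sampled returns.
    """
    phi = F.normalize(phi, dim=1)
    sim = phi @ phi.t() / temp                           # cosine logits
    same = (returns[:, None] - returns[None, :]).abs() < thresh
    same.fill_diagonal_(False)                           # exclude self-pairs
    logits = sim - torch.eye(len(phi), device=phi.device) * 1e9
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = same.sum(1).clamp(min=1)                # rows w/o positives: 0 loss
    return -((log_prob * same).sum(1) / pos_counts).mean()
```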