Adam is a widely used optimization method for training deep learning models. It computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called Adambs, that also adapts to different training examples based on their importance to the model's convergence. To achieve this, we maintain a distribution over all examples, selecting a mini-batch in each iteration by sampling according to this distribution, which we update using a multi-armed bandit algorithm. This ensures that examples that are more beneficial to model training are sampled with higher probability. We theoretically show that Adambs improves the convergence rate of Adam---$O(\sqrt{\frac{\log n}{T}})$ instead of $O(\sqrt{\frac{n}{T}})$ in some cases. Experiments on various models and datasets demonstrate Adambs's fast convergence in practice.
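The bandit-weighted sampling idea can be illustrated with an EXP3-style sketch. All names, the learning rate `eta`, and the reward signal are hypothetical placeholders; Adambs's actual bandit update and reward definition may differ.

```python
import numpy as np

# Hypothetical sketch of bandit-weighted mini-batch sampling (EXP3-style).
# The reward signal stands in for per-example "usefulness"; the paper's
# exact reward and update rule may differ.

rng = np.random.default_rng(0)
n, batch_size, eta = 1000, 32, 0.01
weights = np.ones(n)                      # one bandit weight per example

def sample_batch():
    probs = weights / weights.sum()
    idx = rng.choice(n, size=batch_size, replace=False, p=probs)
    return idx, probs[idx]

def update_weights(idx, probs, rewards):
    # exponential-weights update with importance correction, so that
    # "useful" examples are sampled more often in later iterations
    weights[idx] *= np.exp(eta * rewards / (n * probs))

idx, p = sample_batch()
rewards = rng.random(batch_size)          # stand-in usefulness signal
update_weights(idx, p, rewards)
```

In a full training loop, the sampled mini-batch would feed an ordinary Adam step, with the bandit update run alongside it.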
Attention-based end-to-end text-to-speech synthesis (TTS) is superior to conventional statistical methods in many ways. Transformer-based TTS is one such successful implementation. While Transformer TTS models the speech frame sequence well with a self-attention mechanism, it does not associate input text with output utterances from a syntactic point of view at the sentence level. We propose a novel neural TTS model, denoted GraphSpeech, that is formulated under the graph neural network framework. GraphSpeech explicitly encodes the syntactic relations of the input lexical tokens in a sentence and incorporates this information to derive syntactically motivated character embeddings for the TTS attention mechanism. Experiments show that GraphSpeech consistently outperforms the Transformer TTS baseline in terms of spectrum and prosody rendering of utterances.
Given a stream of graph edges from a dynamic graph, how can we assign anomaly scores to edges in an online manner, for the purpose of detecting unusual behavior, using constant time and memory? Existing approaches aim to detect individually surprising edges. In this work, we propose MIDAS, which focuses on detecting microcluster anomalies, or suddenly arriving groups of suspiciously similar edges, such as lockstep behavior, including denial-of-service attacks in network traffic data. We further propose MIDAS-F to address the problem whereby anomalies are incorporated into the algorithm's internal states, creating a 'poisoning' effect that can allow future anomalies to slip through undetected. MIDAS-F introduces two modifications: 1) we modify the anomaly scoring function, aiming to reduce the 'poisoning' effect of newly arriving edges; 2) we introduce a conditional merge step, which updates the algorithm's data structures after each time tick, but only if the anomaly score is below a threshold value, also to reduce the 'poisoning' effect. Experiments show that MIDAS-F has significantly higher accuracy than MIDAS. MIDAS has the following properties: (a) it detects microcluster anomalies while providing theoretical guarantees on its false positive probability; (b) it is online, processing each edge in constant time and constant memory, and also processes the data 130 to 929 times faster than state-of-the-art approaches; (c) it provides 41% to 55% higher accuracy (in terms of ROC-AUC) than state-of-the-art approaches.
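The chi-squared scoring and the conditional merge can be sketched as follows. This is a simplified illustration that uses plain dictionaries in place of MIDAS's count-min sketches, and the threshold value is arbitrary; only the score formula and the "merge only if not anomalous" logic follow the description above.

```python
from collections import defaultdict

THRESHOLD = 10.0              # illustrative value, not from the paper
total = defaultdict(float)    # edge counts merged over all past ticks
current = defaultdict(float)  # edge counts within the current tick

def score(a, s, t):
    # chi-squared statistic comparing the current-tick count `a`
    # against the historical mean s / t
    if t <= 1 or s == 0:
        return 0.0
    return (a - s / t) ** 2 * t ** 2 / (s * (t - 1))

def process_edge(u, v, t):
    current[(u, v)] += 1
    return score(current[(u, v)], total[(u, v)] + current[(u, v)], t)

def end_of_tick(t):
    # conditional merge: anomalous counts are kept out of `total`,
    # which limits the 'poisoning' effect on future scores
    for key, c in list(current.items()):
        if score(c, total[key] + c, t) < THRESHOLD:
            total[key] += c
    current.clear()
```

A sudden burst of one edge then scores far above the baseline established by its own history, and the burst counts are never merged.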
We study alternating minimization for matrix completion in the simplest possible setting: completing a rank-one matrix from a revealed subset of the entries. We bound the asymptotic convergence rate via the variational characterization of the eigenvalues of a reversible consensus problem. This leads to a polynomial upper bound on the asymptotic rate in terms of the number of nodes as well as the largest degree of the graph of revealed entries.
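The iteration being analyzed has a simple closed form: each coordinate of $u$ (respectively $v$) is the least-squares fit over its revealed entries. A minimal sketch, with illustrative problem sizes and revelation pattern:

```python
import numpy as np

# Alternating minimization for rank-one completion: each row factor u[i]
# (column factor v[j]) is the least-squares fit over its revealed entries.

rng = np.random.default_rng(1)
m, n = 6, 5
M = np.outer(rng.standard_normal(m), rng.standard_normal(n))  # rank-one
mask = rng.random((m, n)) < 0.7                               # revealed set

u, v = np.ones(m), np.ones(n)
for _ in range(100):
    for i in range(m):
        obs = mask[i]
        if obs.any():
            u[i] = M[i, obs] @ v[obs] / (v[obs] @ v[obs])
    for j in range(n):
        obs = mask[:, j]
        if obs.any():
            v[j] = M[obs, j] @ u[obs] / (u[obs] @ u[obs])

# maximum residual over the revealed entries
residual = np.abs(np.outer(u, v) - M)[mask].max()
```

When the bipartite graph of revealed entries is connected, the residual contracts geometrically, which is the rate the analysis above bounds.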
Tacotron-based end-to-end speech synthesis has shown remarkable voice quality. However, the rendering of prosody in the synthesized speech remains to be improved, especially for long sentences, where prosodic phrasing errors can occur frequently. In this paper, we extend the Tacotron-based speech synthesis framework to explicitly model prosodic phrase breaks. We propose a multi-task learning scheme for Tacotron training that optimizes the system to predict both the Mel spectrum and phrase breaks. To the best of our knowledge, this is the first implementation of multi-task learning for Tacotron-based TTS with a prosodic phrasing model. Experiments show that the proposed training scheme consistently improves voice quality for both Chinese and Mongolian systems.
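A multi-task objective of this kind is typically a weighted sum of a spectrum-regression term and a break-prediction term. The weight `lam` and the specific loss forms below are assumptions for illustration, not the paper's exact choices:

```python
import numpy as np

lam = 0.5  # assumed task-weighting factor; not specified by the abstract

def multitask_loss(mel_pred, mel_true, break_logits, break_labels):
    # spectrum regression term (MSE over Mel-spectrogram frames)
    mel_loss = np.mean((mel_pred - mel_true) ** 2)
    # phrase-break term (binary cross-entropy per token)
    p = 1.0 / (1.0 + np.exp(-break_logits))
    eps = 1e-12
    brk_loss = -np.mean(break_labels * np.log(p + eps)
                        + (1 - break_labels) * np.log(1 - p + eps))
    return mel_loss + lam * brk_loss
```

Both terms share the encoder's gradients, which is what lets the phrasing task shape the representations used for spectrum prediction.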
Recently, there has been increasing interest in the transparency and interpretability of Deep Reinforcement Learning (DRL) systems. Verbal explanations, as the most natural means of communication in daily life, deserve more attention, since they allow users to gain a better understanding of the system, which ultimately could lead to a high level of trust and smooth collaboration. This paper reports novel work on generating verbal explanations for DRL agent behaviors. A rule-based model is designed to construct explanations using a series of rules that are predefined with prior knowledge. A learning model is then proposed to extend the implicit logic of generating verbal explanations to general situations by employing the rule-based explanations as training data. The learning model is shown to have better flexibility and generalizability than the static rule-based model. The performance of both models is evaluated quantitatively through objective metrics. The results show that verbal explanations generated by both models improve users' subjective satisfaction with the interpretability of DRL systems. Additionally, seven variants of the learning model are designed to illustrate the contributions of the input channels, the attention mechanism, and the proposed encoder to improving the quality of the verbal explanations.
With the advantages of member diversity and team scale, heterogeneous multi-robot systems (HMRS) are widely used in complex scenarios, including disaster search and rescue, site surveillance, and traffic control. However, due to the variety of task requirements, it is still challenging to accurately allocate limited team capability to satisfy various task needs effectively. In this paper, a novel adaptive cooperation method, inner attention (innerATT), is developed to flexibly team up heterogeneous robots to execute tasks as task needs change. innerATT is designed based on an attention mechanism and a multi-agent actor-critic reinforcement learning algorithm. We validate how the inner attention mechanism can be exploited to enable flexible and robust decision making in guiding cooperation. The results in two designed scenarios, "task variety" and "robot availability variety," show that innerATT enables flexible cooperation and reduces resource consumption in search and rescue tasks.
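The attention step at the core of such a method can be illustrated with generic scaled dot-product attention over teammate feature vectors. This is a sketch of the mechanism only; innerATT's actual parameterization is learned inside the actor-critic networks.

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention: weight teammate features by relevance
    to the querying robot, then return the weighted combination."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w /= w.sum()
    return w @ values, w
```

A robot whose features align with the query receives a larger weight, which is what lets team composition adapt as task needs change.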
A human-swarm cooperative system, which combines multiple robots and a human supervisor into a heterogeneous team, is widely used in emergency scenarios such as criminal tracking in social security and victim assistance in natural disasters. These emergency scenarios require a cooperative team to quickly terminate the current task and transition the system to a new task, which complicates motion planning. Moreover, due to the immediate task transitions, uncertainty from both the physical systems and prior tasks accumulates and decreases swarm performance, causing robot failures and undermining the cooperation between the human and the robot swarm. Therefore, given the quick-transition requirement and the introduced uncertainty, it is challenging for a human-swarm system to respond to emergency tasks, compared with executing normal tasks where a gradual transition between tasks is allowed. Human trust reveals the behavior expectations of others and is used to adjust unsatisfactory behaviors for better cooperation. Inspired by human trust, in this paper a trust-aware reflective control (Trust-R) is developed to dynamically calibrate human-swarm cooperation. Trust-R, based on a weighted mean subsequence reduced (WMSR) algorithm and human trust modeling, helps a swarm self-reflect on its performance from the perspective of human trust, and then proactively correct its faulty behaviors at an early stage, before a human intervenes. One typical task scenario, emergency response, was designed in a real-gravity simulation environment, and a human user study with 145 volunteers was conducted. Trust-R's effectiveness in correcting faulty behaviors in emergency response was validated by the improved swarm performance and increased trust scores.
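The WMSR building block has a standard form: before averaging, each robot discards up to F neighbor values above its own and up to F below, which makes the consensus step resilient to a bounded number of faulty neighbors. A generic sketch of one W-MSR step (not Trust-R's full trust-weighted variant):

```python
def wmsr_update(own, neighbors, F):
    """One W-MSR consensus step: drop up to F extreme values on each side
    of the robot's own value, then average what remains (uniform weights)."""
    high = sorted(v for v in neighbors if v > own)
    low = sorted(v for v in neighbors if v < own)
    eq = [v for v in neighbors if v == own]
    kept = ((low[F:] if len(low) > F else [])
            + eq
            + (high[:-F] if len(high) > F else [])
            + [own])
    return sum(kept) / len(kept)
```

For example, with F = 1 a single wildly faulty neighbor value is discarded before it can drag the robot's state away from the rest of the team.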
Making accurate multi-step-ahead predictions for a complex system is a challenge for many practical applications, especially when only short-term time-series data are available. In this work, we propose a novel framework, the Delay-Embedding-based Forecast Machine (DEFM), to predict the future values of a target variable in an accurate and multi-step-ahead manner based on high-dimensional short-term measurements. With a three-module spatiotemporal architecture, DEFM leverages deep learning to effectively extract both the spatially and sequentially associated information from the short-term dynamics, even with time-varying parameters or additive noise. Trained through a self-supervised scheme, DEFM fits a nonlinear transformation that maps the observed high-dimensional information to the delay embeddings of a target variable, thus predicting future information. The effectiveness and accuracy of DEFM are demonstrated in applications to both representative models and six real-world datasets. Comparison with four traditional prediction methods demonstrates the superiority and robustness of DEFM.
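The delay embedding that serves as DEFM's prediction target is the classical Takens-style construction, which can be sketched in a few lines (dimensions and lag are illustrative):

```python
import numpy as np

def delay_embedding(x, dim, tau=1):
    """Stack delayed copies of a scalar series x into embedding vectors.

    Row t is (x[t], x[t + tau], ..., x[t + (dim - 1) * tau]), so each row
    pairs a present value with its own future values.
    """
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

x = np.arange(10.0)
E = delay_embedding(x, dim=3)
```

Because each embedding row already contains future values of the target, learning a map from current high-dimensional observations to these rows is equivalent to multi-step-ahead prediction.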
Large-scale synthetic datasets are beneficial to stereo matching but usually introduce a known domain bias. Although unsupervised image-to-image translation networks, represented by CycleGAN, show great potential in dealing with the domain gap, it is non-trivial to generalize this method to stereo matching due to pixel distortion and stereo mismatch after translation. In this paper, we propose an end-to-end training framework with domain translation and stereo matching networks to tackle this challenge. First, joint optimization between the domain translation and stereo matching networks in our end-to-end framework makes the former facilitate the latter to the maximum extent. Second, the framework introduces two novel losses, i.e., a bidirectional multi-scale feature re-projection loss and a correlation consistency loss, to help translate all synthetic stereo images into realistic ones while maintaining epipolar constraints. The effective combination of the above two contributions leads to impressive stereo-consistent translation and disparity estimation accuracy. In addition, a mode-seeking regularization term is added to endow the synthetic-to-real translation results with higher fine-grained diversity. Extensive experiments demonstrate the effectiveness of the proposed framework in bridging the synthetic-to-real domain gap for stereo matching.