For advanced driver assistance systems, it is crucial to have information about oncoming vehicles as early as possible. At night, this task is especially difficult due to poor lighting conditions, which is why vehicles use headlamps to improve visibility and ensure safe driving. As humans, we intuitively anticipate oncoming vehicles before they are physically visible by detecting the light reflections caused by their headlamps. In this paper, we present a novel dataset containing 59,746 annotated grayscale images from 346 different scenes in a rural environment at night. In these images, all oncoming vehicles, their corresponding light objects (e.g., headlamps), and their respective light reflections (e.g., on guardrails) are labeled. This is accompanied by an in-depth analysis of the dataset characteristics. We thereby provide the first open-source dataset with comprehensive ground truth data enabling research into new methods for detecting oncoming vehicles from the light reflections they cause, long before the vehicles themselves are directly visible. We consider this an essential step toward further closing the performance gap between current advanced driver assistance systems and human behavior.
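To give a feel for how such annotations might be consumed, the following is a minimal sketch assuming a hypothetical COCO-style JSON layout; the file name `annotations.json` and the category names `vehicle`, `light_object`, and `light_reflection` are illustrative assumptions, not the dataset's published format.

```python
import json
from collections import defaultdict

# Hypothetical COCO-style annotation file; the actual dataset's
# schema may differ (this layout is assumed for illustration only).
with open("annotations.json") as f:
    coco = json.load(f)

cat_names = {c["id"]: c["name"] for c in coco["categories"]}

# Group annotation labels by image so that each frame's vehicles,
# light objects, and light reflections can be inspected together.
by_image = defaultdict(list)
for ann in coco["annotations"]:
    by_image[ann["image_id"]].append(cat_names[ann["category_id"]])

# Count frames containing a reflection but no visible vehicle,
# i.e., the cases where reflections are the only early cue.
early_cues = sum(
    1 for labels in by_image.values()
    if "light_reflection" in labels and "vehicle" not in labels
)
print(f"frames with reflection-only cues: {early_cues}")
```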
Since its discovery in 2013, the phenomenon of adversarial examples has attracted a growing amount of attention from the machine learning community. A deeper understanding of the problem could lead to a better comprehension of how information is processed and encoded in neural networks and, more generally, could help to solve the issue of interpretability in machine learning. Our idea for increasing adversarial resilience starts with the observation that artificial neurons can be divided into two broad categories: AND-like neurons and OR-like neurons. Intuitively, the former are characterised by a relatively low number of combinations of input values which trigger neuron activation, while for the latter the opposite is true. Our hypothesis is that the presence in a network of a sufficiently high number of OR-like neurons could lead to classification "brittleness" and increase the network's susceptibility to adversarial attacks. After constructing an operational definition of a neuron's AND-like behaviour, we introduce several measures to increase the proportion of AND-like neurons in the network: L1-norm weight normalisation; application of an input filter; and a comparison between the neuron output distribution obtained when the network is fed the actual data set and the distribution obtained when it is fed a randomised version of the former, called the "scrambled data set". Tests performed on the MNIST data set hint that the proposed measures could represent an interesting direction to explore.
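As a concrete illustration of the first measure, the sketch below L1-normalises each neuron's incoming weight vector in a PyTorch linear layer. This is a minimal reading of "L1-norm weight normalisation"; the layer sizes are arbitrary and not taken from the paper.

```python
import torch
import torch.nn as nn

def l1_normalise_neurons(layer: nn.Linear) -> None:
    """Rescale each neuron's incoming weights to unit L1 norm.

    Capping the L1 mass of a neuron's weight vector limits how many
    input combinations can push its pre-activation past the threshold,
    nudging the neuron towards AND-like behaviour.
    """
    with torch.no_grad():
        norms = layer.weight.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
        layer.weight.div_(norms)

layer = nn.Linear(784, 128)   # e.g. a first hidden layer for MNIST
l1_normalise_neurons(layer)
print(layer.weight.abs().sum(dim=1)[:5])  # each row now sums to 1
```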
Accurate and efficient product classification is significant for e-commerce applications, as it enables various downstream tasks such as recommendation, retrieval, and pricing. Items often contain both textual and visual information, and utilizing both modalities usually outperforms classification based on either modality alone. In this paper, we describe our methodology and results for the SIGIR eCom Rakuten Data Challenge. We employ a dual attention technique to model image-text relationships using pretrained language and image embeddings. While dual attention has been widely used for Visual Question Answering (VQA) tasks, ours is the first attempt to apply the concept to multimodal classification.
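A minimal sketch of the dual-attention idea follows: text tokens attend to image regions and vice versa, and the two attended summaries are concatenated for classification. The dimensions, head count, and class count are placeholder assumptions, not the challenge entry's actual architecture.

```python
import torch
import torch.nn as nn

class DualAttentionClassifier(nn.Module):
    """Toy dual attention over pretrained text and image embeddings."""

    def __init__(self, txt_dim=768, img_dim=2048, hid=512, n_classes=10):
        super().__init__()
        self.txt_proj = nn.Linear(txt_dim, hid)
        self.img_proj = nn.Linear(img_dim, hid)
        # One cross-attention block per direction (text->image, image->text).
        self.t2i = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.i2t = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * hid, n_classes)

    def forward(self, txt_tokens, img_regions):
        t = self.txt_proj(txt_tokens)    # (B, n_tokens, hid)
        v = self.img_proj(img_regions)   # (B, n_regions, hid)
        t_att, _ = self.t2i(t, v, v)     # text queries attend to image
        v_att, _ = self.i2t(v, t, t)     # image queries attend to text
        pooled = torch.cat([t_att.mean(1), v_att.mean(1)], dim=-1)
        return self.head(pooled)

model = DualAttentionClassifier()
logits = model(torch.randn(2, 16, 768), torch.randn(2, 36, 2048))
print(logits.shape)  # torch.Size([2, 10])
```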
A challenge for rescue teams fighting wildfires in remote areas is the lack of information, such as the size and images of fire areas. Live streaming from Unmanned Aerial Vehicles (UAVs) capturing videos of dynamic fire areas is therefore crucial for firefighter commanders at any location to monitor the fire situation and respond quickly. The 5G network is a promising wireless technology to support such scenarios. In this paper, we consider a UAV-to-UAV (U2U) communication scenario, where a UAV at a high altitude acts as a mobile base station (UAV-BS) that streams videos from other flying UAV users (UAV-UEs) through the uplink. Due to the mobility of the UAV-BS and UAV-UEs, it is important to determine their optimal movements and transmission powers in real time, so as to maximize the data rate of video transmission with smoothness and low latency, while mitigating interference according to the dynamics of the fire areas and wireless channel conditions. To this end, we co-design the video resolution, the movement, and the power control of the UAV-BS and UAV-UEs to maximize the Quality of Experience (QoE) of real-time video streaming, and we apply Deep Q-Network (DQN) and Actor-Critic (AC) learning to maximize the QoE of video transmission from all UAV-UEs to a single UAV-BS. Simulation results show the effectiveness of our proposed algorithm in terms of QoE, delay, and video smoothness as compared to a greedy baseline algorithm.
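A minimal sketch of the learning component follows, showing an epsilon-greedy DQN update over a discretised joint (movement, transmit-power) action space for a single UAV-UE. The state encoding, network sizes, and reward are placeholders, not the paper's exact design.

```python
import random
import torch
import torch.nn as nn

# Toy DQN: the state could encode channel quality and positions, and
# each discrete action is a (movement, tx-power) pair. Sizes and the
# reward are placeholder assumptions for illustration.
STATE_DIM, N_MOVES, N_POWERS = 8, 5, 4
N_ACTIONS = N_MOVES * N_POWERS

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                           nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1

def act(state):
    """Epsilon-greedy choice over joint (movement, power) actions."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(state).argmax().item()

def td_update(s, a, r, s_next, done):
    """One-step temporal-difference update; r would be a QoE reward."""
    q = q_net(s)[a]
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s_next).max()
    loss = (q - target).pow(2)
    opt.zero_grad(); loss.backward(); opt.step()

s = torch.randn(STATE_DIM)
a = act(s)
td_update(s, a, r=1.0, s_next=torch.randn(STATE_DIM), done=0.0)
```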
Saliency detection based on the complementary information from RGB images and depth maps has recently gained great popularity. In this paper, we propose the Complementary Attention and Adaptive Integration Network (CAAI-Net), a novel RGB-D saliency detection model that integrates complementary-attention-based feature concentration and adaptive cross-modal feature fusion into a unified framework for accurate saliency detection. Specifically, we propose a context-aware complementary attention (CCA) module, which consists of a feature interaction component, a complementary attention component, and a global-context component. The CCA module first utilizes the feature interaction component to extract rich local context features. The resulting features are then fed into the complementary attention component, which employs the complementary attention generated from adjacent levels to guide the attention at the current layer, so that mutual background disturbances are suppressed and the network focuses more on the areas containing salient objects. Finally, we utilize a specially designed adaptive feature integration (AFI) module, which explicitly accounts for the low quality of depth maps, to aggregate the RGB and depth features in an adaptive manner. Extensive experiments on six challenging benchmark datasets demonstrate that CAAI-Net is an effective saliency detection model that outperforms nine state-of-the-art models in terms of four widely used metrics. In addition, extensive ablation studies confirm the effectiveness of the proposed CCA and AFI modules.
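The guidance idea behind the CCA module can be sketched as follows: an attention map predicted at an adjacent level re-weights the current level's features so that backgrounds both levels agree on are suppressed. This is a simplified reading for illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn

class ComplementaryAttention(nn.Module):
    """Toy adjacent-level attention guidance (simplified CCA reading)."""

    def __init__(self, channels: int):
        super().__init__()
        self.to_att = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, feat_cur, feat_adj):
        # Assumes both levels share channel count and spatial size.
        att_adj = self.to_att(feat_adj)      # attention from adjacent level
        att_cur = self.to_att(feat_cur)      # attention at current level
        guided = att_cur * att_adj           # agreement keeps salient areas
        return feat_cur * guided + feat_cur  # residual keeps original cues

cca = ComplementaryAttention(64)
out = cca(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```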
This work studies the entropic regularization formulation of the 2-Wasserstein distance on an infinite-dimensional Hilbert space, in particular in the Gaussian setting. We first present the Minimum Mutual Information property, namely that the joint measures of two Gaussian measures on Hilbert space with the smallest mutual information are joint Gaussian measures. This is the infinite-dimensional generalization of the Maximum Entropy property of Gaussian densities on Euclidean space. We then give closed-form formulas for the optimal entropic transport plan, the entropic 2-Wasserstein distance, and the Sinkhorn divergence between two Gaussian measures on a Hilbert space, along with fixed-point equations for the barycenter of a set of Gaussian measures. Our formulations fully exploit the regularization aspect of the entropic formulation and are valid in both singular and nonsingular settings. In the infinite-dimensional setting, both the entropic 2-Wasserstein distance and the Sinkhorn divergence are Fréchet differentiable, in contrast to the exact 2-Wasserstein distance, which is not differentiable. Our Sinkhorn barycenter equation is new and always has a unique solution. In contrast, the finite-dimensional barycenter equation for the entropic 2-Wasserstein distance fails to generalize to the Hilbert space setting. In the setting of reproducing kernel Hilbert spaces (RKHS), our distance formulas are given explicitly in terms of the corresponding kernel Gram matrices, providing an interpolation between the kernel Maximum Mean Discrepancy (MMD) and the kernel 2-Wasserstein distance.
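In the finite-dimensional Gaussian case, these quantities can be sanity-checked numerically. The sketch below estimates the entropic OT cost and a debiased Sinkhorn-divergence-style quantity between samples of two Gaussians using the POT library's Sinkhorn solver; it is a numerical illustration under sampled measures, not the paper's closed-form Hilbert-space formulas.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
n, eps = 500, 0.5

# Samples from two planar Gaussians N(m1, S1) and N(m2, S2).
X = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=n)
Y = rng.multivariate_normal([3.0, 1.0], [[2.0, 0.5], [0.5, 1.0]], size=n)

a = np.full(n, 1.0 / n)                     # uniform weights
M = ot.dist(X, Y)                           # squared Euclidean cost matrix
cost_eps = ot.sinkhorn2(a, a, M, reg=eps)   # entropic OT cost

# Debiasing with the two self-transport terms gives a
# Sinkhorn-divergence-style quantity.
cost_xx = ot.sinkhorn2(a, a, ot.dist(X, X), reg=eps)
cost_yy = ot.sinkhorn2(a, a, ot.dist(Y, Y), reg=eps)
print(cost_eps - 0.5 * (cost_xx + cost_yy))
```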
Change detection, which aims to identify surface changes from bi-temporal images, plays a vital role in ecological protection and urban planning. Since high-resolution (HR) images typically cannot be acquired continuously over time, bi-temporal images with different resolutions are often adopted for change detection in practical applications. Traditional subpixel-based methods for change detection using images with different resolutions may lead to substantial error accumulation when HR images are employed, because of intraclass heterogeneity and interclass similarity. Therefore, it is necessary to develop a novel method for change detection using images with different resolutions that is more suitable for HR images. To this end, we propose a super-resolution-based change detection network (SRCDNet) with a stacked attention module. SRCDNet employs a super-resolution (SR) module containing a generator and a discriminator to directly learn SR images through adversarial learning and overcome the resolution difference between bi-temporal images. To enhance the useful information in multi-scale features, a stacked attention module consisting of five convolutional block attention modules (CBAMs) is integrated into the feature extractor. The final change map is obtained through a metric-learning-based change decision module, wherein a distance map between bi-temporal features is calculated. The experimental results demonstrate the superiority of the proposed method, which not only outperforms all baselines, with the highest F1 scores of 87.40% on the building change detection dataset and 92.94% on the change detection dataset, but also obtains the best accuracies in experiments performed with images having 4x and 8x resolution differences. The source code of SRCDNet will be available at https://github.com/liumency/SRCDNet.
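For reference, a standard CBAM block (Woo et al., 2018), the building block of the stacked attention module, can be sketched as follows; the reduction ratio and kernel size are generic defaults rather than SRCDNet's exact settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return x * torch.sigmoid(self.spatial(s))

cbam = CBAM(64)
print(cbam(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```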
In the design optimization of metal forming, it is increasingly important to use surrogate models to analyze finite element analysis (FEA) simulations. However, traditional surrogate models using scalar-based machine learning methods (SBMLMs) fall short in accuracy and generalizability, because SBMLMs fail to harness the location information of the simulations. To overcome these shortcomings, image-based machine learning methods (IBMLMs) are leveraged in this paper. The underlying theory of location information, which underpins the advantages of IBMLMs, is qualitatively interpreted. Based on this theory, a Res-SE-U-Net IBMLM surrogate model is developed and compared with a multi-layer perceptron (MLP) as a reference SBMLM surrogate model. It is demonstrated that the IBMLM model is advantageous over the MLP SBMLM model in accuracy, generalizability, robustness, and informativeness. This paper presents a promising methodology for leveraging IBMLMs in surrogate models to make maximum use of the information in FEA results. Prospective future studies inspired by this paper are also discussed.
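A squeeze-and-excitation (SE) block, presumably one ingredient of the Res-SE-U-Net architecture, can be sketched as follows; the reduction ratio and tensor shapes are generic assumptions for illustration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block (Hu et al., 2018); the reduction
    ratio is a generic default, not necessarily the paper's choice."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)     # excite: channel re-weighting

# An FEA field map (e.g. thickness over the blank) is a natural
# image-like input for such a surrogate; shapes here are arbitrary.
se = SEBlock(32)
print(se(torch.randn(4, 32, 64, 64)).shape)  # torch.Size([4, 32, 64, 64])
```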
We propose a new framework, Translation between Augmented Natural Languages (TANL), to solve many structured prediction language tasks including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling the problem by training task-specific discriminative classifiers, we frame it as a translation task between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular, achieves new state-of-the-art results on joint entity and relation extraction (CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks and even when training a single model to solve all tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve the performance in a low-resource regime, thanks to better use of label semantics.
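The flavour of the augmented-language framing can be shown with a toy example: entity types and relations are embedded as bracketed markup in the output sentence, from which structure is recovered by simple parsing. The markup below is a simplified illustration of the idea, not TANL's exact output grammar.

```python
import re

# Toy augmented output for joint entity/relation extraction: entities
# are bracketed with a type and, optionally, a relation pointing at
# another entity's surface form. Illustrative format only.
augmented = ("[ Tolkien | person ] wrote "
             "[ The Hobbit | book | author = Tolkien ]")

entities, relations = [], []
for span in re.findall(r"\[ (.+?) \]", augmented):
    parts = [p.strip() for p in span.split("|")]
    head, etype = parts[0], parts[1]
    entities.append((head, etype))
    for rel in parts[2:]:
        rel_type, tail = [q.strip() for q in rel.split("=")]
        relations.append((head, rel_type, tail))

print(entities)   # [('Tolkien', 'person'), ('The Hobbit', 'book')]
print(relations)  # [('The Hobbit', 'author', 'Tolkien')]
```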
Breast cancer is among the deadliest diseases, affecting mostly women worldwide. Although traditional detection methods have proven valid for the task, they still commonly yield low accuracies and demand considerable time and effort from professionals. Therefore, a computer-aided diagnosis (CAD) system capable of providing early detection becomes highly desirable. In the last decade, machine learning-based techniques have been of paramount importance in this context, since they are capable of extracting essential information from data and reasoning about it. However, such approaches still suffer from imbalanced data, particularly in medical settings, where the number of samples from healthy people is, in general, considerably higher than the number from patients. Therefore, this paper proposes the $\text{O}^2$PF, a data oversampling method based on the unsupervised Optimum-Path Forest algorithm. Experiments conducted over the full oversampling scenario demonstrate the robustness of the model, which is compared against three well-established oversampling methods on three breast cancer datasets and three general-purpose medical datasets.
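The general recipe of cluster-based oversampling can be sketched as follows, with k-means standing in for the unsupervised Optimum-Path Forest clustering (OPF is not available in scikit-learn); the sketch shows synthetic minority samples drawn by interpolation within clusters and is not the $\text{O}^2$PF algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
minority = X[y == 1]

# Cluster the minority class (k-means as a stand-in for OPF clustering).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(minority)

synthetic = []
deficit = (y == 0).sum() - (y == 1).sum()   # samples needed to balance
for _ in range(deficit):
    c = rng.integers(3)
    members = minority[clusters == c]
    a, b = members[rng.integers(len(members), size=2)]
    lam = rng.random()
    synthetic.append(lam * a + (1 - lam) * b)  # interpolate inside a cluster

X_bal = np.vstack([X, synthetic])
y_bal = np.concatenate([y, np.ones(deficit)])
print(X_bal.shape, y_bal.mean())  # balanced classes after oversampling
```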