We propose HOI Transformer to tackle human object interaction (HOI) detection in an end-to-end manner. Current approaches either decouple the HOI task into separate stages of object detection and interaction classification or introduce a surrogate interaction problem. In contrast, our method, named HOI Transformer, streamlines the HOI pipeline by eliminating the need for many hand-designed components. HOI Transformer reasons about the relations of objects and humans from the global image context and directly predicts HOI instances in parallel. A quintuple matching loss is introduced to supervise HOI predictions in a unified way. Our method is conceptually much simpler and demonstrates improved accuracy. Without bells and whistles, HOI Transformer achieves $26.61\%$ $AP$ on HICO-DET and $52.9\%$ $AP_{role}$ on V-COCO, surpassing previous methods while being much simpler. We hope our approach will serve as a simple and effective alternative for HOI tasks. Code is available at https://github.com/bbepoch/HoiTransformer .
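For readers unfamiliar with set-based prediction, the sketch below illustrates how a bipartite matching over predicted HOI quintuples (human box, object box, object class, interaction class, confidence) could be computed with the Hungarian algorithm; the cost terms and weights are illustrative assumptions, not the exact matching loss of HOI Transformer.

```python
# Minimal sketch of set-based matching over HOI quintuples (illustrative only;
# cost terms and weights are assumptions, not the paper's exact formulation).
import numpy as np
from scipy.optimize import linear_sum_assignment

def l1_box_cost(pred_boxes, gt_boxes):
    # pred_boxes: (P, 4), gt_boxes: (G, 4) -> (P, G) pairwise L1 distance
    return np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)

def match_quintuples(pred, gt, w_cls=1.0, w_box=1.0):
    """pred: dict with 'h_box'/'o_box' (P, 4), 'obj_prob'/'verb_prob' (P, C);
    gt: dict with 'h_box'/'o_box' (G, 4), 'obj_cls'/'verb_cls' (G,) int labels.
    Returns matched (pred_idx, gt_idx) index pairs."""
    cost = (
        w_box * (l1_box_cost(pred["h_box"], gt["h_box"])
                 + l1_box_cost(pred["o_box"], gt["o_box"]))
        - w_cls * pred["obj_prob"][:, gt["obj_cls"]]    # reward correct object class
        - w_cls * pred["verb_prob"][:, gt["verb_cls"]]  # reward correct interaction class
    )
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return pred_idx, gt_idx
```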
Although deep learning (DL) has received much attention in accelerated MRI, recent studies suggest that small perturbations may lead to instabilities in DL-based reconstructions, raising concerns about their clinical application. However, these works focus on single-coil acquisitions, which is not the practical setting. We investigate instabilities caused by small adversarial attacks for multi-coil acquisitions. Our results suggest that parallel imaging and multi-coil CS exhibit considerable instabilities against small adversarial perturbations.
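As a rough illustration of what a "small adversarial attack" on a reconstruction can look like, the sketch below crafts an additive k-space perturbation by gradient ascent on the reconstruction error under a norm budget; the attack form, step sizes, and the generic `recon_fn` interface are assumptions, not the authors' exact procedure.

```python
# Illustrative sketch of a small adversarial perturbation on the measurement:
# gradient ascent on the reconstruction error w.r.t. an additive k-space
# perturbation, projected back onto a small norm ball each step.
import torch

def small_adversarial_perturbation(recon_fn, kspace, target_img,
                                   eps=1e-3, steps=10, lr=1e-4):
    """recon_fn: differentiable reconstruction (multi-coil k-space -> image).
    Returns a perturbation with ||delta|| <= eps * ||kspace||."""
    delta = torch.zeros_like(kspace, requires_grad=True)
    budget = eps * kspace.norm()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = -torch.norm(recon_fn(kspace + delta) - target_img)  # maximize error
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # project back onto the norm ball
            delta.mul_(min(1.0, (budget / (delta.norm() + 1e-12)).item()))
    return delta.detach()
```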
Traditional industrial recommenders are usually trained on a single business domain and then serve that domain. In large commercial platforms, however, it is often the case that the recommenders need to make click-through rate (CTR) predictions for multiple business domains. Different domains have overlapping user groups and items, so commonalities exist among them. Since user groups and user behaviors differ across domains, the domains also have distinctions. The distinctions result in different domain-specific data distributions, which makes it hard for a single shared model to work well on all domains. To address the problem, we present the Star Topology Adaptive Recommender (STAR), where one model is learned to serve all domains effectively. Concretely, STAR has a star topology, which consists of shared centered parameters and domain-specific parameters. The shared parameters are used to learn the commonalities of all domains, and the domain-specific parameters capture domain distinctions for more refined prediction. Given requests from different domains, STAR can adapt its parameters conditioned on the domain. Experimental results on production data validate the superiority of the proposed STAR model. Up to now, STAR has been deployed in the display advertising system of Alibaba, obtaining an average improvement of 8.0% on CTR and 6.0% on RPM (Revenue Per Mille).
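To make the "adapt its parameters conditioned on the domain" idea concrete, here is a minimal sketch of a star-topology fully connected layer; the particular combination rule shown (element-wise product of shared and domain-specific weights, plus added biases) is one plausible instantiation and may differ from the deployed model.

```python
# Sketch of a star-topology fully connected layer: shared "centered" weights
# are combined with per-domain weights to produce the final domain-adapted
# weights (combination rule is an assumption for illustration).
import torch
import torch.nn as nn

class StarFC(nn.Module):
    def __init__(self, in_dim, out_dim, num_domains):
        super().__init__()
        self.w_shared = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)
        self.b_shared = nn.Parameter(torch.zeros(out_dim))
        self.w_domain = nn.Parameter(torch.ones(num_domains, in_dim, out_dim))
        self.b_domain = nn.Parameter(torch.zeros(num_domains, out_dim))

    def forward(self, x, domain_id):
        # Final weights adapt to the requesting domain.
        w = self.w_shared * self.w_domain[domain_id]
        b = self.b_shared + self.b_domain[domain_id]
        return x @ w + b
```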
A large fraction of major waterways have dams influencing streamflow, which must be accounted for in large-scale hydrologic modeling. However, daily streamflow prediction for basins with dams is challenging for various modeling approaches, especially at large scales. Here we took a divide-and-conquer approach to examine which types of basins could be well represented by a long short-term memory (LSTM) deep learning model using only readily available information. We analyzed data from 3557 basins (83% dammed) over the contiguous United States and noted strong impacts of reservoir purposes, capacity-to-runoff ratio (dor), and diversion on streamflow modeling. Surprisingly, while the LSTM model trained on a widely used reference-basin dataset performed poorly for non-reference basins, the model trained on the whole dataset presented a median test Nash-Sutcliffe efficiency coefficient (NSE) of 0.74, reaching benchmark-level performance. The zero-dor, small-dor, and large-dor basins were found to have distinct behaviors, so migrating models between categories yielded catastrophic results. However, training with pooled data from the different sets yielded optimal median NSEs of 0.73, 0.78, and 0.71 for these groups, respectively, showing noticeable advantages over existing models. These results support a coherent, mixed modeling strategy in which smaller dams are modeled as part of rainfall-runoff processes, while dammed basins must not be treated as reference ones and must be included in the training set; large-dor reservoirs can then be represented explicitly, and future work should examine modeling reservoirs for fire protection and irrigation, followed by those for hydroelectric power generation, flood control, etc.
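For reference, the Nash-Sutcliffe efficiency used as the evaluation metric compares simulated and observed streamflow $Q$ over the test period; a value of 1 indicates a perfect fit, and values near the reported 0.7-0.8 range are generally considered good:

\[ \mathrm{NSE} = 1 - \frac{\sum_{t=1}^{T}\left(Q_{\mathrm{obs}}^{t} - Q_{\mathrm{sim}}^{t}\right)^{2}}{\sum_{t=1}^{T}\left(Q_{\mathrm{obs}}^{t} - \overline{Q}_{\mathrm{obs}}\right)^{2}} \]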
Although deep convolutional neural networks (DCNNs) have achieved accuracy in skin lesion classification comparable or even superior to that of dermatologists, practical implementation of these models for skin cancer screening in low-resource settings is hindered by their limitations in computational cost and training data. To overcome these limitations, we propose a low-cost and high-performance data augmentation strategy that includes two consecutive stages: augmentation search and network search. At the augmentation search stage, the augmentation strategy is optimized in the search space of Low-Cost-Augment (LCA) under the criterion of balanced accuracy (BACC) with 5-fold cross-validation. At the network search stage, the DCNNs are fine-tuned with the full training set in order to select the model with the highest BACC. The efficiency of the proposed data augmentation strategy is verified on the HAM10000 dataset using EfficientNets as the baseline. With the proposed strategy, we are able to reduce the search space to 60 and achieve a high BACC of 0.853 with a single DCNN model and no external database, making it suitable for deployment on mobile devices for DCNN-based skin lesion detection in low-resource settings.
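Since balanced accuracy is the selection criterion in both search stages, a short reference implementation may help: BACC is the unweighted mean of per-class recalls, which prevents the majority classes of an imbalanced dataset such as HAM10000 from dominating the score.

```python
# Balanced accuracy (BACC): the unweighted mean of per-class recalls.
import numpy as np

def balanced_accuracy(y_true, y_pred, num_classes):
    """y_true, y_pred: 1-D integer numpy arrays of class labels."""
    recalls = []
    for c in range(num_classes):
        mask = (y_true == c)
        if mask.any():
            recalls.append((y_pred[mask] == c).mean())  # recall of class c
    return float(np.mean(recalls))
```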
Image co-segmentation is an active computer vision task that aims to segment the common objects in a set of images. Recently, researchers have designed various learning-based algorithms to handle the co-segmentation task. The main difficulty in this task is how to effectively transfer information between images to infer the common object regions. In this paper, we present CycleSegNet, a novel framework for the co-segmentation task. Our network design has two key components: a region correspondence module, which is the basic operation for exchanging information between local image regions, and a cycle refinement module, which utilizes ConvLSTMs to progressively update image embeddings and exchange information in a cyclic manner. Experimental results on four popular benchmark datasets -- PASCAL VOC, MSRC, Internet, and iCoseg -- demonstrate that our proposed method significantly outperforms existing networks and achieves new state-of-the-art performance.
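The sketch below shows one common way such region-level information exchange can be realized: local features of one image attend to regions of the other via a softmax affinity matrix. It is an illustrative form of the operation, not necessarily the exact region correspondence module of CycleSegNet.

```python
# Hedged sketch of region correspondence: regions of image A aggregate
# features from the most related regions of image B via a softmax affinity.
import torch

def region_correspondence(feat_a, feat_b):
    """feat_a, feat_b: (C, H, W) feature maps of two images.
    Returns, for image A, features pulled from corresponding regions of B."""
    C, H, W = feat_a.shape
    a = feat_a.reshape(C, H * W).t()                         # (HW, C) region descriptors
    b = feat_b.reshape(C, H * W).t()                         # (HW, C)
    affinity = torch.softmax(a @ b.t() / C ** 0.5, dim=-1)   # (HW, HW) correspondence weights
    exchanged = affinity @ b                                  # (HW, C) info from image B
    return exchanged.t().reshape(C, H, W)
```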
Point cloud segmentation is a fundamental visual understanding task in 3D vision. A fully supervised point cloud segmentation network often requires a large amount of data with point-wise annotations, which are expensive to obtain. In this work, we present the Compositional Prototype Network, which can perform point cloud segmentation with only a few labeled training samples. Inspired by the few-shot learning literature on images, our network directly transfers label information from the limited training data to unlabeled test data for prediction. The network decomposes the representations of complex point cloud data into a set of local regional representations and utilizes them to calculate the compositional prototypes of a visual concept. Our network includes a key Multi-View Comparison Component that exploits the redundant views of the support set. To evaluate the proposed method, we create a new segmentation benchmark, ScanNet-$6^i$, built upon the ScanNet dataset. Extensive experiments show that our method outperforms the baselines by a significant margin. Moreover, when we use our network to handle the long-tail problem in a fully supervised point cloud segmentation dataset, it can also effectively boost the performance of the few-shot classes.
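As a rough sketch of the prototype idea, the snippet below computes per-view class prototypes by masked averaging of support point features and labels query points by cosine similarity to the nearest prototype; the exact compositional decomposition and multi-view comparison in the paper may differ.

```python
# Hedged sketch: prototypes as masked averages of support point features,
# then nearest-prototype scoring of query points (illustrative only).
import torch
import torch.nn.functional as F

def class_prototypes(support_feats, support_masks):
    """support_feats: (S, N, C) point features from S support views/samples,
    support_masks: (S, N) binary masks of the target class. Returns (S, C)."""
    masks = support_masks.unsqueeze(-1).float()     # (S, N, 1)
    summed = (support_feats * masks).sum(dim=1)     # (S, C)
    counts = masks.sum(dim=1).clamp(min=1.0)        # (S, 1) avoid divide-by-zero
    return summed / counts

def score_queries(query_feats, prototypes):
    """Score each query point by cosine similarity to its closest prototype."""
    q = F.normalize(query_feats, dim=-1)            # (M, C)
    p = F.normalize(prototypes, dim=-1)             # (S, C)
    return (q @ p.t()).max(dim=-1).values           # (M,) per-point class score
```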
Federated learning (FL) is a new paradigm for large-scale learning tasks across mobile devices. However, practical FL deployment over resource-constrained mobile devices confronts multiple challenges. For example, it is not clear how to establish an effective wireless network architecture to support FL over mobile devices. Besides, as modern machine learning models become more and more complex, the local on-device training and intermediate model updates in FL are becoming too power-hungry and radio-resource-intensive for mobile devices to afford. To address these challenges, in this paper, we bridge FL with another recently surging technology, 5G, and develop a wireless transmission and weight quantization co-design for energy-efficient FL over heterogeneous 5G mobile devices. Briefly, the high data rate featured by 5G helps relieve the severe communication concern, and the multi-access edge computing (MEC) in 5G provides a perfect network architecture to support FL. Under the MEC architecture, we develop flexible weight quantization schemes to facilitate on-device local training over heterogeneous 5G mobile devices. Observing that the energy consumption of local computing is comparable to that of transmitting model updates via 5G, we formulate the energy-efficient FL problem as a mixed-integer programming problem that jointly determines the quantization strategies and allocates the wireless bandwidth for heterogeneous 5G mobile devices. The goal is to minimize the overall FL energy consumption (computing + 5G transmissions) over 5G mobile devices while guaranteeing learning performance and training latency. Generalized Benders' Decomposition is applied to develop feasible solutions, and extensive simulations are conducted to verify the effectiveness of the proposed scheme.
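To ground the quantization side of the co-design, here is a minimal sketch of a symmetric uniform weight quantizer; the per-device bit-width is the kind of discrete decision variable a mixed-integer program would select jointly with bandwidth allocation. The quantizer form is an assumption for illustration, not necessarily the paper's scheme.

```python
# Illustrative symmetric uniform quantizer: fewer bits shrink both the
# on-device compute/storage footprint and the size of the transmitted update.
import torch

def quantize_weights(w, num_bits):
    """Quantize a weight tensor to num_bits with a symmetric uniform grid."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = (w.abs().max() / qmax).clamp(min=1e-12)   # step size from the weight range
    return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
```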
LiDAR-based 3D object detectors typically need a large amount of point cloud data with detailed labels for training, but these detailed labels are commonly expensive to acquire. In this paper, we propose a manual-label-free 3D detection algorithm that leverages the CARLA simulator to generate a large amount of self-labeled training samples and introduces a novel Domain Adaptive VoxelNet (DA-VoxelNet) that can bridge the distribution gap from synthetic data to the real scenario. The self-labeled training samples are generated by a set of high-quality 3D models embedded in the CARLA simulator and a proposed LiDAR-guided sampling algorithm. Then a DA-VoxelNet that integrates both a sample-level DA module and an anchor-level DA module is proposed to enable the detector trained on the synthetic data to adapt to the real scenario. Experimental results show that the proposed unsupervised DA 3D detector achieves 76.66% and 56.64% mAP on the KITTI evaluation set in BEV mode and 3D mode, respectively. The results reveal a promising prospect of training a LiDAR-based 3D detector without any hand-tagged labels.
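Adversarial domain-adaptation modules of this kind are frequently built around a gradient reversal layer (GRL) feeding a domain classifier; whether DA-VoxelNet uses exactly this construction is an assumption here, but the sketch shows the standard building block.

```python
# Hedged sketch of a gradient reversal layer (GRL): identity on the forward
# pass, negated (scaled) gradient on the backward pass, so the feature
# extractor is pushed toward domain-invariant features.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    # Insert between the features (sample- or anchor-level) and a domain classifier.
    return GradReverse.apply(x, lambd)
```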
Inspired by the success of deep learning, recent industrial Click-Through Rate (CTR) prediction models have made the transition from traditional shallow approaches to deep approaches. Deep Neural Networks (DNNs) are known for their ability to learn non-linear interactions from raw features automatically; however, the non-linear feature interaction is learned in an implicit manner. The non-linear interaction may be hard to capture implicitly, and explicitly modeling the \textit{co-action} of raw features is beneficial for CTR prediction. \textit{Co-action} refers to the collective effect of features on the final prediction. In this paper, we argue that current CTR models do not fully explore the potential of feature co-action. We conduct experiments and show that the effect of feature co-action is seriously underestimated. Motivated by this observation, we propose the feature Co-Action Network (CAN) to explore the potential of feature co-action. The proposed model can efficiently and effectively capture feature co-action, which improves model performance while reducing storage and computation consumption. Experimental results on public and industrial datasets show that CAN outperforms state-of-the-art CTR models by a large margin. Up to now, CAN has been deployed in the Alibaba display advertisement system, obtaining an average improvement of 12\% on CTR and 8\% on RPM.
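The snippet below gives a hedged sketch of one way to model feature co-action explicitly: the embedding of one feature is reinterpreted as the weights of a tiny MLP applied to the other feature's embedding, so the pair interacts without storing a dedicated cross-feature embedding. The dimensions and activation are illustrative, not the deployed configuration.

```python
# Hedged sketch of an explicit co-action unit: one feature's embedding
# parameterizes a micro-MLP applied to the other feature's embedding.
import torch

def co_action(induction_emb, feed_emb, dims=(8, 4)):
    """induction_emb: (D,) embedding sliced into micro-MLP weights
    (D must be >= sum of consecutive dims[i]*dims[i+1]);
    feed_emb: (dims[0],) embedding fed through that micro-MLP."""
    h, outputs, offset, in_dim = feed_emb, [], 0, dims[0]
    for out_dim in dims[1:]:
        w = induction_emb[offset:offset + in_dim * out_dim].view(in_dim, out_dim)
        offset += in_dim * out_dim
        h = torch.tanh(h @ w)        # one micro-MLP layer
        outputs.append(h)
        in_dim = out_dim
    return torch.cat(outputs, dim=-1)  # co-action representation for the feature pair
```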