The cold-start problem is a long-standing challenge in recommender systems: the lack of user-item interactions significantly hurts recommendation quality for new users and items. Recently, meta-learning-based methods have attempted to learn globally shared prior knowledge across all users that can be rapidly adapted to new users and items with very few interactions. Despite significant performance improvements, the globally shared parameters may lead to local optima. Moreover, these methods are oblivious to the inherent information and feature interactions of new users and items, which are critical in cold-start scenarios. In this paper, we propose a Task aligned Meta-learning based Augmented Graph (TMAG) to address cold-start recommendation. Specifically, a fine-grained task-aligned constructor is proposed to cluster similar users and divide tasks for meta-learning, enabling a consistent optimization direction. In addition, an augmented graph neural network with two graph-enhanced approaches is designed to alleviate data sparsity and capture high-order user-item interactions. We validate our approach on three real-world datasets in various cold-start scenarios, showing the superiority of TMAG over state-of-the-art methods for cold-start recommendation.
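The task-aligned constructor described above could be approximated by clustering users on their (attribute) embeddings and treating each cluster as one meta-learning task. The following is a minimal sketch of that idea using plain k-means; the function name, the use of k-means, and all parameters are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def build_aligned_tasks(user_embeddings: np.ndarray, n_tasks: int,
                        n_iters: int = 20, seed: int = 0):
    """Toy task-aligned constructor: group similar users into meta-learning
    tasks via k-means on their embeddings, so the users inside one task
    share a consistent optimization direction (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen users.
    centroids = user_embeddings[rng.choice(len(user_embeddings), n_tasks, replace=False)]
    for _ in range(n_iters):
        # Assign each user to its nearest centroid.
        d = np.linalg.norm(user_embeddings[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # Recompute centroids; keep the old centroid if a cluster empties.
        for k in range(n_tasks):
            members = user_embeddings[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    # Each task = the indices of the users in one cluster.
    return [np.where(assign == k)[0] for k in range(n_tasks)]
```

Each returned index set would then serve as the support/query pool of one meta-learning task, rather than sampling tasks per individual user.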
While sequential recommender systems achieve significant improvements in capturing user dynamics, we argue that sequential recommenders are vulnerable to substitution-based profile pollution attacks. To demonstrate our hypothesis, we propose a substitution-based adversarial attack algorithm, which modifies the input sequence by selecting certain vulnerable elements and substituting them with adversarial items. In both untargeted and targeted attack scenarios, we observe significant performance deterioration under the proposed profile pollution algorithm. Motivated by these observations, we design an efficient adversarial defense method called Dirichlet neighborhood sampling. Specifically, we sample item embeddings from a convex hull constructed by multi-hop neighbors to replace the original items in input sequences. During sampling, a Dirichlet distribution is used to approximate the probability distribution over the neighborhood, so that the recommender learns to combat local perturbations. Additionally, we design an adversarial training method tailored for sequential recommender systems. In particular, we represent selected items with one-hot encodings and perform gradient ascent on the encodings to search for the worst-case linear combination of item embeddings during training. As such, the embedding function learns robust item representations and the trained recommender is resistant to test-time adversarial examples. Extensive experiments show the effectiveness of both our attack and defense methods, which consistently outperform baselines by a significant margin across model architectures and datasets.
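The Dirichlet neighborhood sampling step can be sketched in a few lines: draw convex weights from a Dirichlet distribution and mix the embeddings of an item's multi-hop neighbors, which by construction yields a point inside their convex hull. This is a minimal illustrative sketch, assuming a simple neighbor dictionary and a flat Dirichlet prior; the function name and parameters are not the paper's exact formulation.

```python
import numpy as np

def dirichlet_neighborhood_sample(item_emb: np.ndarray, neighbors: dict,
                                  item_id: int, alpha: float = 1.0, seed=None):
    """Toy Dirichlet neighborhood sampling: replace an item's embedding
    with a random point inside the convex hull spanned by the item and
    its multi-hop neighbors (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    hood = [item_id] + list(neighbors.get(item_id, []))  # item plus its neighbors
    w = rng.dirichlet(np.full(len(hood), alpha))         # convex weights, sum to 1
    return (w[:, None] * item_emb[hood]).sum(axis=0)     # point in the convex hull
```

Because the weights are non-negative and sum to one, the sampled embedding always stays within the neighborhood's convex hull, which is what lets the recommender learn to tolerate local perturbations.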
We present a unified method, termed Unicorn, that can simultaneously solve four tracking problems (SOT, MOT, VOS, MOTS) with a single network using the same model parameters. Due to the fragmented definitions of the object tracking problem itself, most existing trackers are developed to address a single task or a subset of tasks, and overspecialize on the characteristics of specific tasks. By contrast, Unicorn provides a unified solution, adopting the same input, backbone, embedding, and head across all tracking tasks. For the first time, we accomplish the unification of the tracking network architecture and learning paradigm. Unicorn performs on par with or better than its task-specific counterparts on 8 tracking datasets, including LaSOT, TrackingNet, MOT17, BDD100K, DAVIS16-17, MOTS20, and BDD100K MOTS. We believe that Unicorn will serve as a solid step towards a general vision model. Code is available at https://github.com/MasterBin-IIAU/Unicorn.
Dominant trackers generate a fixed-size rectangular region, i.e., the search region, based on the previous prediction or the initial bounding box as the model input. While this manner improves tracking efficiency, a fixed-size search region lacks flexibility and is likely to fail in challenging cases, e.g., fast motion and distractor interference. Trackers tend to lose the target object when the search region is too small, or to be distracted when the search region is excessively large. In this work, we propose a novel tracking paradigm, called Search Region Regulation Tracking (SRRT), which applies a proposed search region regulator to dynamically estimate an optimal search region for every frame. To adapt to the object's appearance variation during tracking, we further propose a locking-state determined updating strategy for reference frame updating. Our SRRT framework is concise, without elaborate design, yet achieves evident improvements over the baselines and competitive results against other state-of-the-art trackers on seven challenging benchmarks. On the large-scale LaSOT benchmark, SRRT improves SiamRPN++ and TransT by absolute gains of 4.6% and 3.1% in terms of AUC.
Masked Autoencoders (MAE) have shown great potential in self-supervised pre-training for language and 2D image transformers. However, it remains an open question how to exploit masked autoencoding for learning 3D representations of irregular point clouds. In this paper, we propose Point-M2AE, a strong Multi-scale MAE pre-training framework for hierarchical self-supervised learning of 3D point clouds. Unlike the standard transformer in MAE, we modify the encoder and decoder into pyramid architectures to progressively model spatial geometries and capture both fine-grained and high-level semantics of 3D shapes. For the encoder, which downsamples point tokens by stages, we design a multi-scale masking strategy to generate consistent visible regions across scales, and adopt a local spatial self-attention mechanism to focus on neighboring patterns. By multi-scale token propagation, the lightweight decoder gradually upsamples point tokens with complementary skip connections from the encoder, which further promotes reconstruction from a global-to-local perspective. Extensive experiments demonstrate the state-of-the-art performance of Point-M2AE for 3D representation learning. With a frozen encoder after pre-training, Point-M2AE achieves 92.9% accuracy with a linear SVM on ModelNet40, even surpassing some fully trained methods. By fine-tuning on downstream tasks, Point-M2AE achieves 86.43% accuracy on ScanObjectNN, +3.36% over the second-best method, and largely benefits few-shot classification, part segmentation, and 3D object detection with the hierarchical pre-training scheme. Code will be available at https://github.com/ZrrSkywalker/Point-M2AE.
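One plausible way to realize consistent visible regions across scales is to sample the visible set at the coarsest scale and propagate it downward through the downsampling assignments, so a fine token is visible exactly when its coarse parent is. The sketch below assumes a precomputed fine-to-coarse assignment array; the function name, parameters, and this top-down propagation direction are illustrative assumptions, not necessarily the paper's exact strategy.

```python
import numpy as np

def multiscale_visible_masks(assignments: np.ndarray, n_coarse: int,
                             visible_ratio: float = 0.2, seed: int = 0):
    """Toy multi-scale masking sketch: sample visible tokens at the
    coarsest scale, then mark a finer-scale token visible iff the coarse
    token it was downsampled into is visible, keeping the visible region
    spatially consistent across scales (illustrative)."""
    rng = np.random.default_rng(seed)
    n_vis = max(1, int(round(visible_ratio * n_coarse)))
    visible_coarse = np.zeros(n_coarse, dtype=bool)
    visible_coarse[rng.choice(n_coarse, n_vis, replace=False)] = True
    visible_fine = visible_coarse[assignments]  # propagate visibility downward
    return visible_coarse, visible_fine
```

The same propagation can be repeated over several pyramid stages, so every scale of the encoder sees the same underlying surface regions.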
Unmanned aerial vehicles (UAVs) have been widely used in various fields, and their intrusions into security and privacy have raised public concern. Several detection and tracking systems for UAVs have been introduced in recent years, but most of them are based on radio frequency, radar, and other media. We argue that the field of computer vision is mature enough to detect and track invading UAVs. Thus, we propose a visible-light dataset called the Dalian University of Technology Anti-UAV dataset, DUT Anti-UAV for short. It contains a detection dataset with a total of 10,000 images and a tracking dataset with 20 videos that include both short-term and long-term sequences. All frames and images are precisely annotated by hand. We use this dataset to train several existing detection algorithms and evaluate their performance. Several tracking methods are also tested on our tracking dataset. Furthermore, we propose a clear and simple tracking algorithm combined with detection that inherits the detector's high precision. Extensive experiments show that tracking performance improves considerably after fusing detection, thus providing a new attempt at UAV tracking using our dataset. The datasets and results are publicly available at: https://github.com/wangdongdut/DUT-Anti-UAV
Online advertising driven by auctions brings billions of dollars in revenue for social networking services and e-commerce platforms. The GSP auction, which is simple and easy for advertisers to understand, has almost become the benchmark ad auction mechanism in the industry. However, the allocation stability of GSP depends on the separable CTR assumption, which means that GSP considers neither position-dependent externalities nor ad-dependent externalities in multi-slot scenarios, leading to suboptimal performance. Some GSP-based deep auctions (e.g., DeepGSP, DNA) have attempted to upgrade GSP with deep neural networks, but they model only local externalities and thus remain suboptimal. On the other hand, although VCG-based multi-slot auctions (e.g., VCG, WVCG) take externalities into consideration, they lack an efficient balance between revenue and social welfare. In this paper, we propose a novel auction named Neural Multi-slot Auction (NMA) to tackle the above-mentioned challenges. Specifically, we model global externalities effectively with a context-aware list-wise prediction module to achieve better performance. We design a list-wise deep rank module to guarantee incentive compatibility in end-to-end learning. Furthermore, we propose an auxiliary loss for social welfare to effectively reduce the decline of social welfare while maximizing revenue. Experimental results on both offline large-scale datasets and online A/B tests demonstrate that NMA obtains higher revenue with balanced social welfare than existing auction mechanisms (i.e., GSP, DNA, WVCG) in industrial practice, and we have successfully deployed NMA on the Meituan food delivery platform.
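The trade-off between revenue maximization and the auxiliary social-welfare term can be pictured as a combined training objective: maximize revenue while penalizing allocations whose welfare falls below that of a reference (e.g., welfare-maximizing) allocation. The hinge form, the reference baseline, and all names below are illustrative assumptions rather than the paper's exact loss.

```python
def nma_style_loss(revenue: float, social_welfare: float,
                   sw_reference: float, lam: float = 1.0) -> float:
    """Toy combined objective: minimize negative revenue plus a hinge
    penalty on social-welfare decline relative to a reference allocation
    (illustrative sketch, not the paper's exact loss)."""
    # The penalty is zero whenever welfare meets or exceeds the reference,
    # so the auxiliary term only activates when welfare would decline.
    return -revenue + lam * max(0.0, sw_reference - social_welfare)
```

With `lam` large, the learned auction is pushed toward welfare-preserving allocations; with `lam` small, it behaves closer to pure revenue maximization.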
The non-uniformly distributed nature of the 3D dynamic point cloud (DPC) brings significant challenges to its highly efficient inter-frame compression. This paper proposes a novel 3D sparse convolution-based Deep Dynamic Point Cloud Compression (D-DPCC) network to compensate and compress the DPC geometry with 3D motion estimation and motion compensation in the feature space. In the proposed D-DPCC network, we design a {\it Multi-scale Motion Fusion} (MMF) module to accurately estimate the 3D optical flow between the feature representations of adjacent point cloud frames. Specifically, we utilize a 3D sparse convolution-based encoder to obtain the latent representation for motion estimation in the feature space, and introduce the proposed MMF module for fused 3D motion embedding. Besides, for motion compensation, we propose a 3D {\it Adaptively Weighted Interpolation} (3DAWI) algorithm with a penalty coefficient to adaptively decrease the impact of distant neighbors. We compress the motion embedding and the residual with a lossy autoencoder-based network. To our knowledge, this paper is the first to propose an end-to-end deep dynamic point cloud compression framework. Experimental results show that the proposed D-DPCC framework achieves an average 76\% BD-Rate (Bjontegaard Delta Rate) gain against the state-of-the-art Video-based Point Cloud Compression (V-PCC) v13 in inter mode.
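The interpolation idea behind motion compensation can be sketched as inverse-distance-weighted gathering over k nearest neighbors, with an additive penalty term in the weight normalizer that damps queries whose neighbors are all far away. This is a minimal illustrative sketch; the function name, the exact placement of the penalty term, and the parameters are assumptions, not the 3DAWI formulation itself.

```python
import numpy as np

def adaptively_weighted_interpolation(query_xyz: np.ndarray, ref_xyz: np.ndarray,
                                      ref_feat: np.ndarray, k: int = 3,
                                      penalty: float = 0.1) -> np.ndarray:
    """Toy adaptively weighted interpolation: each query point gathers
    features from its k nearest reference points with inverse-distance
    weights; the penalty term in the normalizer shrinks the output when
    all neighbors are distant (illustrative sketch)."""
    out = np.zeros((len(query_xyz), ref_feat.shape[1]))
    for i, q in enumerate(query_xyz):
        d = np.linalg.norm(ref_xyz - q, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-8)  # inverse-distance weights
        # Distant neighbors give a small weight sum, so the additive
        # penalty dominates the denominator and attenuates the output.
        out[i] = (w[:, None] * ref_feat[idx]).sum(axis=0) / (w.sum() + penalty)
    return out
```

In the compression pipeline this gathering would be applied to motion-warped feature maps rather than raw coordinates, but the weighting logic is the same.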
Granular metamaterials are a promising choice for the realization of mechanical computing devices. As preliminary evidence of this, we demonstrate here how to embed Boolean logic gates (AND and XOR) into a granular metamaterial by evolving where particular grains are placed in the material. Our results confirm the existence of gradients of increasing "AND-ness" and "XOR-ness" within the space of possible materials that can be followed by evolutionary search. We measure the computational functionality of a material by probing how it transforms bits encoded as vibrations with zero or non-zero amplitude. We compared the evolution of materials built from mass-contrasting particles and materials built from stiffness-contrasting particles, and found that the latter were more evolvable. We believe this work may pave the way toward evolutionary design of increasingly sophisticated, programmable, and computationally dense metamaterials with certain advantages over more traditional computational substrates.
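Since bits are encoded as zero or non-zero vibration amplitudes, the "AND-ness" or "XOR-ness" of a material can be scored by how cleanly its output amplitudes for the four input patterns separate into the gate's 0-outputs and 1-outputs. The margin-style fitness below is an illustrative assumption about such a score, not the paper's exact measure.

```python
def gate_fitness(amplitudes, truth_table):
    """Toy "gate-ness" score: amplitudes are the material's outputs for
    inputs (0,0), (0,1), (1,0), (1,1); the score is the gap between the
    weakest output that should read 1 and the strongest output that
    should read 0. Positive => the gate is separable by a threshold.
    (Illustrative sketch of a fitness for evolutionary search.)"""
    ones = [a for a, t in zip(amplitudes, truth_table) if t == 1]
    zeros = [a for a, t in zip(amplitudes, truth_table) if t == 0]
    return min(ones) - max(zeros)

AND_TABLE = [0, 0, 0, 1]  # target outputs for (0,0), (0,1), (1,0), (1,1)
XOR_TABLE = [0, 1, 1, 0]
```

An evolutionary search over grain placements would then climb this score, following the gradients of increasing "AND-ness" or "XOR-ness" described above.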
Recent research showed that an autoencoder trained with the speech of a single speaker, called an exemplar autoencoder (eAE), can be used for any-to-one voice conversion (VC). Compared to large-scale many-to-many models such as AutoVC, the eAE model is easy and fast to train, and may recover more details of the target speaker. To ensure VC quality, the latent code should represent, and only represent, content information. However, this is not easy to attain for an eAE, as it is unaware of any speaker variation during model training. To tackle the problem, we propose a simple yet effective approach based on a cycle consistency loss. Specifically, we train eAEs of multiple speakers with a shared encoder, and meanwhile encourage the speech reconstructed from any speaker-specific decoder to yield a latent code consistent with that of the original speech when cycled back and encoded again. Experiments conducted on the AISHELL-3 corpus showed that this new approach consistently improves the baseline eAE. The source code and examples are available at the project page: http://project.cslt.org/.
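The cycle described above (encode, decode through a speaker-specific decoder, re-encode, compare codes) can be sketched directly. `encode` and `decode_k` below are stand-ins for the shared encoder and the speaker-k decoder; the mean-squared form of the penalty is an illustrative assumption.

```python
import numpy as np

def cycle_consistency_loss(encode, decode_k, x: np.ndarray) -> float:
    """Toy cycle-consistency sketch: if the latent code carries only
    content, re-encoding the reconstruction from any speaker-specific
    decoder should recover the same code (illustrative)."""
    z = encode(x)                  # latent code of the original speech
    x_cycled = decode_k(z)         # reconstruct via speaker k's decoder
    z_cycled = encode(x_cycled)    # re-encode the reconstruction
    return float(np.mean((z - z_cycled) ** 2))  # penalize code drift
```

During training this term would be added, for every speaker-specific decoder sharing the encoder, to the usual reconstruction loss, pushing speaker information out of the shared code.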