Duo Li

E2-AEN: End-to-End Incremental Learning with Adaptively Expandable Network

Jul 14, 2022
Guimei Cao, Zhanzhan Cheng, Yunlu Xu, Duo Li, Shiliang Pu, Yi Niu, Fei Wu

Expandable networks have demonstrated their advantages in dealing with the catastrophic forgetting problem in incremental learning. Since different tasks may need different structures, recent methods design dynamic structures adapted to each task via sophisticated techniques. Their routine is to first search for an expandable structure and then train on the new task, which breaks training into multiple stages and leads to suboptimal results or excessive computational cost. In this paper, we propose an end-to-end trainable adaptively expandable network, named E2-AEN, which dynamically generates lightweight structures for new tasks without any accuracy drop on previous tasks. Specifically, the network contains a series of powerful feature adapters that augment the previously learned representations for new tasks and avoid task interference. These adapters are controlled via an adaptive gate-based pruning strategy, which decides whether the expanded structures can be pruned, making the network structure dynamically changeable according to the complexity of the new tasks. Moreover, we introduce a novel sparsity-activation regularization to encourage the model to learn discriminative features with limited parameters. E2-AEN reduces cost and can be built upon any feed-forward architecture in an end-to-end manner. Extensive experiments on both classification (i.e., CIFAR and VDD) and detection (i.e., COCO, VOC and the ICCV 2021 SSLAD challenge) benchmarks demonstrate the effectiveness of the proposed method, which achieves remarkable new results.
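To make the adapter-plus-gate idea concrete, here is a minimal PyTorch sketch of a gated feature adapter with an L1-style gate penalty. The module name, the 1x1 bottleneck design, and the exact form of the regularizer are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

class GatedFeatureAdapter(nn.Module):
    """Lightweight residual adapter whose contribution is scaled by a
    learnable gate; gates pushed toward zero mark adapters as prunable."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Scalar gate logit; sigmoid(gate) near 0 means the adapter can be pruned.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, frozen_feature: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)
        # The frozen backbone feature is augmented, never overwritten,
        # so previously learned tasks keep their accuracy.
        return frozen_feature + g * self.adapter(frozen_feature)

def gate_sparsity_penalty(adapters, weight: float = 1e-3) -> torch.Tensor:
    """L1-style pressure on the gates, loosely mirroring the role of the
    sparsity-activation regularization (the exact form is an assumption)."""
    return weight * sum(torch.sigmoid(a.gate).sum() for a in adapters)
```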

Technical Report for ICCV 2021 Challenge SSLAD-Track3B: Transformers Are Better Continual Learners

Jan 13, 2022
Duo Li, Guimei Cao, Yunlu Xu, Zhanzhan Cheng, Yi Niu

For the SSLAD-Track 3B challenge on continual learning, we propose COntinual Learning with Transformer (COLT). We find that transformers suffer less from catastrophic forgetting than convolutional neural networks. The major principle of our method is to equip a transformer-based feature extractor with old-knowledge distillation and head-expanding strategies to combat catastrophic forgetting. In this report, we first introduce the overall framework of continual learning for object detection. Then, we analyze the effect of the key elements of our solution on withstanding catastrophic forgetting. Our method achieves 70.78 mAP on the SSLAD-Track 3B challenge test set.
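As a rough illustration of the two ingredients named above, the sketch below pairs a generic temperature-softened distillation term with a per-task head-expansion module. Function names and the exact loss form are assumptions; the report's implementation may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def old_knowledge_distillation(new_logits, old_logits, T: float = 2.0):
    """KL divergence between temperature-softened predictions of the frozen
    old model and the current model -- a standard distillation term, assumed
    here rather than quoted from the report."""
    log_p_new = F.log_softmax(new_logits / T, dim=-1)
    log_p_old = F.log_softmax(old_logits / T, dim=-1)
    return F.kl_div(log_p_new, log_p_old, reduction="batchmean", log_target=True) * T * T

class ExpandableHead(nn.Module):
    """One linear head per seen task; earlier heads are frozen when a new
    task arrives -- a common head-expansion scheme, assumed here."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()

    def add_task(self, num_classes: int):
        for head in self.heads:              # freeze heads of previous tasks
            for p in head.parameters():
                p.requires_grad = False
        self.heads.append(nn.Linear(self.feat_dim, num_classes))

    def forward(self, feats: torch.Tensor):
        return [head(feats) for head in self.heads]
```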

* Rank 1st on ICCV 2021 SSLAD-Track 3B 

Unifying Nonlocal Blocks for Neural Networks

Aug 17, 2021
Lei Zhu, Qi She, Duo Li, Yanye Lu, Xuejing Kang, Jie Hu, Changhu Wang

Nonlocal-based blocks are designed to capture long-range spatial-temporal dependencies in computer vision tasks. Although they have shown excellent performance, they still lack a mechanism for encoding the rich, structured information among elements in an image or video. In this paper, to theoretically analyze the properties of these nonlocal-based blocks, we provide a new perspective for interpreting them: we view them as a set of graph filters generated on a fully-connected graph. Specifically, when choosing the Chebyshev graph filter, a unified formulation can be derived for explaining and analyzing the existing nonlocal-based blocks (e.g., the nonlocal block, the nonlocal stage, and the double attention block). Furthermore, by considering the spectral properties, we propose an efficient and robust spectral nonlocal block which, when inserted into deep neural networks, is more robust and flexible in capturing long-range dependencies than existing nonlocal blocks. Experimental results demonstrate the clear-cut improvements and practical applicability of our method on image classification, action recognition, semantic segmentation, and person re-identification tasks.
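The graph-filter view can be sketched in a few lines: build a row-normalized affinity matrix over all spatial positions and apply a truncated Chebyshev expansion to the features. The module below is an illustrative PyTorch approximation; the expansion order, embedding design, and initialization are assumptions rather than the paper's exact block:

```python
import torch
import torch.nn as nn

class ChebyshevNonlocal(nn.Module):
    """Nonlocal block seen as a graph filter: a row-normalized affinity
    matrix A over all positions defines a fully-connected graph, and the
    output is a truncated Chebyshev expansion sum_k c_k T_k(A) X."""

    def __init__(self, channels: int, order: int = 2):
        super().__init__()
        assert order >= 2, "sketch assumes at least the T_0 and T_1 terms"
        self.theta = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.phi = nn.Conv2d(channels, channels // 2, kernel_size=1)
        # Filter coefficients start at zero, so the block begins as identity.
        self.coeffs = nn.Parameter(torch.zeros(order))
        self.order = order

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C/2)
        k = self.phi(x).flatten(2)                     # (B, C/2, HW)
        A = torch.softmax(q @ k, dim=-1)               # (B, HW, HW) affinity
        X = x.flatten(2).transpose(1, 2)               # (B, HW, C)
        # Chebyshev recurrence: T_0(A)X = X, T_1(A)X = AX,
        # T_k(A)X = 2A T_{k-1}(A)X - T_{k-2}(A)X.
        t_prev, t_curr = X, A @ X
        out = self.coeffs[0] * t_prev + self.coeffs[1] * t_curr
        for i in range(2, self.order):
            t_prev, t_curr = t_curr, 2 * (A @ t_curr) - t_prev
            out = out + self.coeffs[i] * t_curr
        return x + out.transpose(1, 2).reshape(b, c, h, w)
```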

* Accepted by ICCV 2021 

m-RevNet: Deep Reversible Neural Networks with Momentum

Aug 16, 2021
Duo Li, Shang-Hua Gao

In recent years, connections between deep residual networks and first-order Ordinary Differential Equations (ODEs) have been disclosed. In this work, we further bridge deep neural architecture design with second-order ODEs and propose a novel reversible neural network, termed m-RevNet, characterized by inserting a momentum update into the residual blocks. The reversible property allows us to perform the backward pass without access to the activation values of the forward pass, greatly relieving the storage burden during training. Furthermore, the theoretical foundation in second-order ODEs grants m-RevNet stronger representational power than vanilla residual networks, which potentially explains its performance gains. For certain learning scenarios, we analytically and empirically reveal that our m-RevNet succeeds where standard ResNets fail. Comprehensive experiments on various image classification and semantic segmentation benchmarks demonstrate the superiority of our m-RevNet over ResNet in terms of both memory efficiency and recognition performance.
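The momentum-residual update and its algebraic inverse can be written down directly. The block below is a sketch under the assumed update rule v' = mu * v + f(x), x' = x + v'; the residual function f and the constant momentum mu are illustrative choices, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class MomentumReversibleBlock(nn.Module):
    """Residual block with a momentum state: v' = mu * v + f(x), x' = x + v'.
    Both steps are algebraically invertible, so forward activations need not
    be stored during training."""

    def __init__(self, channels: int, mu: float = 0.9):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )
        self.mu = mu

    def forward(self, x: torch.Tensor, v: torch.Tensor):
        v_next = self.mu * v + self.f(x)   # momentum update (second-order term)
        x_next = x + v_next                # position update (residual step)
        return x_next, v_next

    @torch.no_grad()
    def inverse(self, x_next: torch.Tensor, v_next: torch.Tensor):
        x = x_next - v_next                   # undo the position update
        v = (v_next - self.f(x)) / self.mu    # undo the momentum update
        return x, v
```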

* Idea overlapped with existing work 

Potential Convolution: Embedding Point Clouds into Potential Fields

Apr 05, 2021
Dengsheng Chen, Haowen Deng, Jun Li, Duo Li, Yao Duan, Kai Xu

Recently, various convolutions based on continuous or discrete kernels for point cloud processing have been widely studied and have achieved impressive performance in many applications, such as shape classification and scene segmentation. However, they still suffer from drawbacks. For continuous kernels, inaccurate estimation of the kernel weights constitutes a bottleneck for further performance improvement, while discrete kernels, represented as points located in 3D space, lack rich geometric information. In this work, rather than defining a continuous or discrete kernel, we directly embed convolutional kernels into learnable potential fields, giving rise to potential convolution. This makes it convenient to define various potential functions for potential convolution that generalize well to a wide range of tasks. Specifically, we provide two simple yet effective potential functions via point-wise convolution operations. Comprehensive experiments demonstrate the effectiveness of our method, which achieves superior performance on popular 3D shape classification and scene segmentation benchmarks compared with other state-of-the-art point convolution methods.
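A minimal sketch of the idea, assuming the potential field is a small shared MLP evaluated at relative neighbor offsets (the paper's two concrete potential functions may differ):

```python
import torch
import torch.nn as nn

class PotentialConv(nn.Module):
    """Kernel weights come from a learnable potential field evaluated at
    relative neighbor positions, implemented with shared point-wise layers.
    This is an illustrative sketch, not the authors' implementation."""

    def __init__(self, in_ch: int, out_ch: int, hidden: int = 32):
        super().__init__()
        # Potential field: maps a 3D offset to a per-channel weight.
        self.potential = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, in_ch),
        )
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, xyz, feats, neighbor_idx):
        # xyz: (B, N, 3), feats: (B, N, C), neighbor_idx: (B, N, K)
        b, n, k = neighbor_idx.shape
        idx = neighbor_idx.reshape(b, n * k)
        nb_xyz = torch.gather(xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
        nb_feat = torch.gather(feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.size(-1)))
        offsets = nb_xyz.view(b, n, k, 3) - xyz.unsqueeze(2)   # relative positions
        w = self.potential(offsets)                            # (B, N, K, C) weights
        agg = (w * nb_feat.view(b, n, k, -1)).sum(2)           # weighted aggregation
        return self.proj(agg)
```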

Learning the Superpixel in a Non-iterative and Lifelong Manner

Mar 19, 2021
Lei Zhu, Qi She, Bin Zhang, Yanye Lu, Zhilin Lu, Duo Li, Jie Hu

Superpixels are generated by automatically clustering the pixels of an image into hundreds of compact partitions, and they are widely used to perceive object contours owing to their excellent contour adherence. Although some works use Convolutional Neural Networks (CNNs) to generate high-quality superpixels, we challenge the design principles of these networks, specifically their dependence on manual labels and excessive computation resources, which limits their flexibility compared with traditional unsupervised segmentation methods. We redefine CNN-based superpixel segmentation as a lifelong clustering task and propose an unsupervised CNN-based method called LNS-Net. LNS-Net learns superpixels in a non-iterative and lifelong manner without any manual labels. Specifically, a lightweight feature embedder is proposed for LNS-Net to efficiently generate cluster-friendly features. With those features, seed nodes can be automatically assigned to cluster pixels in a non-iterative way. Additionally, LNS-Net adapts to sequential lifelong learning by rescaling the gradients of its weights based on both channel and spatial context to avoid overfitting. Experiments show that the proposed LNS-Net achieves significantly better performance on three benchmarks, with nearly ten times lower complexity, than other state-of-the-art methods.
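The non-iterative clustering step can be illustrated as a single nearest-seed assignment in the learned embedding space. The function below is a simplified sketch: the grid seed initialization and squared-Euclidean distance are assumptions, and the gradient-rescaling mechanism is omitted:

```python
import torch

def non_iterative_superpixel(features, grid_h: int, grid_w: int):
    """Assign every pixel to its nearest seed embedding in one pass
    (no k-means style iterations). features: (B, C, H, W)."""
    b, c, h, w = features.shape
    # Sample seed embeddings on a grid_h x grid_w lattice.
    ys = torch.linspace(0, h - 1, grid_h).long()
    xs = torch.linspace(0, w - 1, grid_w).long()
    seeds = features[:, :, ys][:, :, :, xs].flatten(2)          # (B, C, S)
    pix = features.flatten(2)                                   # (B, C, HW)
    # Squared distance between every pixel and every seed.
    d = (pix.unsqueeze(-1) - seeds.unsqueeze(2)).pow(2).sum(1)  # (B, HW, S)
    return d.argmin(-1).view(b, h, w)                           # superpixel labels
```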

* Accepted by CVPR 2021 

PointFlow: Flowing Semantics Through Points for Aerial Image Segmentation

Mar 11, 2021
Xiangtai Li, Hao He, Xia Li, Duo Li, Guangliang Cheng, Jianping Shi, Lubin Weng, Yunhai Tong, Zhouchen Lin

Aerial image segmentation is a particular semantic segmentation problem with several challenging characteristics that general semantic segmentation does not have. There are two critical issues: an extremely imbalanced foreground-background distribution, and many small objects set against a complex background. These problems make recent dense affinity context modeling perform poorly even compared with baselines, owing to over-introduced background context. To handle these problems, we propose a point-wise affinity propagation module based on the Feature Pyramid Network (FPN) framework, named PointFlow. Rather than learning dense affinity, a sparse affinity map is generated over selected points between adjacent features, which reduces the noise introduced by the background while maintaining efficiency. In particular, we design a dual point matcher to select points from the salient area and object boundaries, respectively. Experimental results on three different aerial segmentation datasets suggest that the proposed method is more effective and efficient than state-of-the-art general semantic segmentation methods. Notably, our method achieves the best speed-accuracy trade-off on the three aerial benchmarks. Further experiments on three general semantic segmentation datasets prove the generality of our method. Code will be provided at https://github.com/lxtGH/PFSegNets.
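A simplified sketch of sparse point-wise affinity between adjacent pyramid levels: pick the top-k most salient coarse points and let every fine position attend only to them. The module name, saliency head, and choice of k are assumptions, and the dual point matcher is not reproduced here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparsePointAffinity(nn.Module):
    """Sparse affinity between adjacent FPN levels: instead of a dense
    HW x HW affinity, each fine position attends to only k salient points
    selected from the upsampled coarse feature."""

    def __init__(self, channels: int, k: int = 128):
        super().__init__()
        self.saliency = nn.Conv2d(channels, 1, kernel_size=1)
        self.k = k

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor):
        # fine: (B, C, H, W) from level l; coarse: (B, C, h, w) from level l+1.
        b, c, h, w = fine.shape
        up = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
        score = self.saliency(up).flatten(2)              # (B, 1, HW) saliency
        idx = score.topk(self.k, dim=-1).indices          # (B, 1, k) salient points
        idx = idx.expand(-1, c, -1)                       # (B, C, k)
        pts = torch.gather(up.flatten(2), 2, idx)         # selected point features
        # Sparse affinity: every fine position attends only to the k points.
        att = torch.softmax(fine.flatten(2).transpose(1, 2) @ pts, dim=-1)  # (B, HW, k)
        out = (att @ pts.transpose(1, 2)).transpose(1, 2).view(b, c, h, w)
        return fine + out
```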

* Accepted by CVPR 2021 