Junnan Liu

Perception-and-Regulation Network for Salient Object Detection

Jul 27, 2021
Jinchao Zhu, Xiaoyu Zhang, Xian Fang, Junnan Liu


Effective fusion of different types of features is the key to salient object detection. Most existing network structures are designed based on the subjective experience of researchers, and the feature fusion process does not consider the relationship between the fused features and the highest-level features. In this paper, we focus on feature relationships and propose a novel global attention unit, which we term the "perception-and-regulation" (PR) block, that adaptively regulates the feature fusion process by explicitly modeling interdependencies between features. The perception part uses the fully-connected-layer structure of classification networks to learn the size and shape of objects. The regulation part selectively strengthens and weakens the features to be fused. An imitating eye observation (IEO) module is further employed to improve the global perception ability of the network. By imitating foveal and peripheral vision, the IEO module scrutinizes highly detailed objects and organizes the broad spatial scene to better segment objects. Extensive experiments on SOD datasets demonstrate that the proposed method performs favorably against 22 state-of-the-art methods.
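To make the perceive-then-regulate idea concrete, here is a minimal PyTorch sketch, not the authors' code: it assumes a squeeze-and-excitation-style design in which fully-connected layers summarize global context from high-level features ("perception") and the resulting per-channel weights rescale the features before fusion ("regulation"). The name PRBlockSketch, the reduction ratio, and the weighted-addition fusion are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PRBlockSketch(nn.Module):
    """Hypothetical sketch of a perception-and-regulation style unit.

    Perception: fully-connected layers (as in a classification head)
    summarize global context into per-channel weights.
    Regulation: the weights selectively strengthen or weaken the
    lower-level features before fusion. The paper's exact design
    may differ.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # global spatial summary
        self.perceive = nn.Sequential(                 # FC "perception" part
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # weights in (0, 1)
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Perceive global context from the highest-level features.
        b, c, _, _ = high.shape
        w = self.perceive(self.pool(high).view(b, c)).view(b, c, 1, 1)
        # Regulate: reweight the low-level features, then fuse.
        return low * w + high


# Usage: fuse a regulated low-level map with a high-level map.
low = torch.randn(2, 64, 32, 32)
high = torch.randn(2, 64, 32, 32)
fused = PRBlockSketch(64)(low, high)
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```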


Noised Consistency Training for Text Summarization

May 28, 2021
Junnan Liu, Qianren Mao, Bang Liu, Hao Peng, Hongdong Zhu, Jianxin Li


Neural abstractive summarization methods often require large quantities of labeled training data. However, labeling large amounts of summarization data is often prohibitive due to time, financial, and expertise constraints, which limits the usefulness of summarization systems in practical applications. In this paper, we argue that this limitation can be overcome by a semi-supervised approach: consistency training, which leverages large amounts of unlabeled data to improve the performance of supervised learning over a small corpus. Consistency regularization constrains model predictions to be invariant to small noise applied to input articles. By adding a noised unlabeled corpus to regularize training, this framework achieves comparable performance without using the full labeled dataset. In particular, we verify that leveraging large amounts of unlabeled data noticeably improves the performance of supervised learning over an insufficient labeled dataset.
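The core mechanism the abstract describes, predictions on a clean unlabeled article serving as a target for predictions on a noised copy, can be sketched as a single loss term. This is a minimal illustration of consistency regularization in general, not the paper's exact objective: the function name consistency_loss, the KL divergence form, and the toy linear model with Gaussian input noise (standing in for a summarizer and article-level noising) are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_loss(model: nn.Module,
                     clean: torch.Tensor,
                     noised: torch.Tensor) -> torch.Tensor:
    """Hypothetical consistency term for semi-supervised training.

    Predictions on the clean unlabeled input serve as a fixed
    (detached) target; predictions on a noised copy are pulled toward
    it, making the model invariant to small input perturbations.
    Assumes `model` returns logits; the paper's loss may differ.
    """
    with torch.no_grad():
        target = F.softmax(model(clean), dim=-1)      # detached target
    log_pred = F.log_softmax(model(noised), dim=-1)   # noised prediction
    # KL(target || prediction): penalize drift under input noise.
    return F.kl_div(log_pred, target, reduction="batchmean")


# Toy usage: a linear "model" and Gaussian noise stand in for a
# summarizer and the article-level noising used in the paper.
model = nn.Linear(16, 8)
clean = torch.randn(4, 16)
noised = clean + 0.1 * torch.randn_like(clean)

loss = consistency_loss(model, clean, noised)
loss.backward()   # gradients flow only through the noised branch
print(float(loss))
```

In practice this term would be added, with a weighting coefficient, to the supervised loss computed on the small labeled corpus, so the unlabeled articles shape the model without ever needing reference summaries.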
