Zipei Chen

Video Shadow Detection via Spatio-Temporal Interpolation Consistency Training

Jun 17, 2022
Xiao Lu, Yihong Cao, Sheng Liu, Chengjiang Long, Zipei Chen, Xuanyu Zhou, Yimin Yang, Chunxia Xiao

It is challenging to annotate large-scale datasets for supervised video shadow detection methods. Directly applying a model trained on labeled images to video frames may lead to high generalization error and temporally inconsistent results. In this paper, we address these challenges by proposing a Spatio-Temporal Interpolation Consistency Training (STICT) framework that rationally feeds unlabeled video frames, together with labeled images, into the training of an image shadow detection network. Specifically, we propose Spatial and Temporal ICT, in which we define two new interpolation schemes, \textit{i.e.}, spatial interpolation and temporal interpolation. We then derive the corresponding spatial and temporal interpolation consistency constraints, which enhance generalization in the pixel-wise classification task and encourage temporally consistent predictions, respectively. In addition, we design a Scale-Aware Network for multi-scale shadow knowledge learning in images, and propose a scale-consistency constraint to minimize the discrepancy among predictions at different scales. Our approach is extensively validated on the ViSha dataset and a self-annotated dataset. Experimental results show that, even without video labels, our approach outperforms most state-of-the-art supervised, semi-supervised, and unsupervised image/video shadow detection methods, as well as other methods for related tasks. Code and dataset are available at \url{https://github.com/yihong-97/STICT}.
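
To make the interpolation consistency idea concrete, here is a minimal PyTorch sketch, assuming a mean-teacher-style setup with hypothetical `student` and `teacher` networks; the names, the Beta-distributed mixing coefficient, and the MSE objective are illustrative assumptions, not the exact STICT formulation (see the linked repository for the authors' implementation). Given two unlabeled inputs (adjacent frames for the temporal variant, two views of the same image for the spatial one), it penalizes the gap between the prediction on the interpolated input and the interpolation of the predictions.

```python
import torch
import torch.nn.functional as F

def interpolation_consistency_loss(student, teacher, x_a, x_b, alpha=0.75):
    """Encourage f(mix(x_a, x_b)) to match mix(f(x_a), f(x_b))."""
    # Sample the mixing coefficient from a Beta distribution (mixup-style).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_a + (1.0 - lam) * x_b        # interpolated input
    with torch.no_grad():                        # teacher provides soft targets
        p_a = torch.sigmoid(teacher(x_a))        # shadow probability maps
        p_b = torch.sigmoid(teacher(x_b))
    target = lam * p_a + (1.0 - lam) * p_b       # interpolation of predictions
    pred = torch.sigmoid(student(x_mix))         # prediction on interpolation
    return F.mse_loss(pred, target)              # pixel-wise consistency
```

In a mean-teacher setup, this unsupervised loss would be added to the supervised loss on labeled images, with the teacher's weights maintained as an exponential moving average of the student's.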

* Accepted in CVPR2022 

CANet: A Context-Aware Network for Shadow Removal

Aug 23, 2021
Zipei Chen, Chengjiang Long, Ling Zhang, Chunxia Xiao

In this paper, we propose a novel two-stage context-aware network, named CANet, for shadow removal, in which contextual information from non-shadow regions is transferred to shadow regions in the embedded feature spaces. At Stage-I, we propose a contextual patch matching (CPM) module to generate a set of potential matching pairs of shadow and non-shadow patches. Combined with the potential contextual relationships between shadow and non-shadow regions, our well-designed contextual feature transfer (CFT) mechanism transfers contextual information from non-shadow to shadow regions at different scales. With the reconstructed feature maps, we remove shadows in the L and A/B channels separately. At Stage-II, we use an encoder-decoder to refine the current results and generate the final shadow removal result. We evaluate the proposed CANet on two benchmark datasets and on real-world shadow images with complex scenes. Extensive experimental results demonstrate the efficacy of CANet and show that it performs favorably against state-of-the-art methods.
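
As a rough illustration of the patch-matching idea behind the CPM module, the following PyTorch sketch finds, for each shadow patch, its most similar non-shadow patch via cosine similarity of pooled features. The patch size, average pooling, and similarity measure are assumptions made for illustration, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def match_patches(feat, shadow_mask, patch=8):
    """For each patch, find the most similar non-shadow patch.

    feat:        (B, C, H, W) feature map from an encoder
    shadow_mask: (B, 1, H, W) soft shadow mask in [0, 1]
    Returns the index of the best non-shadow match per patch (B, N)
    and a boolean flag marking which patches are shadow (B, N).
    """
    # Pool features and the mask into a grid of patch descriptors.
    desc = F.avg_pool2d(feat, patch).flatten(2).transpose(1, 2)    # (B, N, C)
    occ = F.avg_pool2d(shadow_mask, patch).flatten(2).squeeze(1)   # (B, N)
    desc = F.normalize(desc, dim=-1)                               # unit norm
    sim = desc @ desc.transpose(1, 2)                              # (B, N, N)
    # Forbid matching to shadow patches, assuming at least one
    # non-shadow patch exists in every image.
    is_shadow = occ > 0.5
    sim = sim.masked_fill(is_shadow.unsqueeze(1), float("-inf"))
    return sim.argmax(dim=-1), is_shadow
```

In a full pipeline, the features of each matched non-shadow patch would then be transferred into the corresponding shadow patch (the role of the CFT mechanism) before decoding the shadow-free result.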

* This paper was accepted to the IEEE International Conference on Computer Vision (ICCV), Montreal, Canada, Oct 11-17, 2021 