Guodong Xu

Mind the Gap in Distilling StyleGANs

Aug 18, 2022
Guodong Xu, Yuenan Hou, Ziwei Liu, Chen Change Loy

The StyleGAN family is one of the most popular Generative Adversarial Network (GAN) architectures for unconditional generation. Despite its impressive performance, its high storage and computation demands impede deployment on resource-constrained devices. This paper provides a comprehensive study of distilling from the popular StyleGAN-like architecture. Our key insight is that the main challenge of StyleGAN distillation lies in the output discrepancy issue, where the teacher and student models yield different outputs given the same input latent code. Standard knowledge distillation losses typically fail under this heterogeneous distillation scenario. We conduct a thorough analysis of the causes and effects of this discrepancy and find that the mapping network plays a vital role in determining the semantic information of generated images. Based on this finding, we propose a novel initialization strategy for the student model that ensures output consistency to the maximum extent. To further enhance the semantic consistency between the teacher and student models, we present a latent-direction-based distillation loss that preserves semantic relations in latent space. Extensive experiments demonstrate the effectiveness of our approach in distilling StyleGAN2 and StyleGAN3, outperforming existing GAN distillation methods by a large margin.
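
As a rough illustration of the initialization strategy described above, the sketch below copies a teacher's mapping network into a smaller student so both map the same latent code z to the same intermediate latent w; the ToyStyleGAN class and the mapping/synthesis attribute names are illustrative assumptions, not the authors' released code.

```python
import copy

import torch
import torch.nn as nn

class ToyStyleGAN(nn.Module):
    """Toy stand-in for a StyleGAN-like generator: mapping + synthesis.
    Real StyleGAN2/3 generators are far larger; this is only illustrative."""
    def __init__(self, z_dim=64, w_dim=64, channels=128):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(z_dim, w_dim), nn.ReLU(),
                                     nn.Linear(w_dim, w_dim))
        self.synthesis = nn.Sequential(nn.Linear(w_dim, channels), nn.ReLU(),
                                       nn.Linear(channels, 3 * 8 * 8))

    def forward(self, z):
        w = self.mapping(z)                      # latent code -> style code
        return self.synthesis(w).view(-1, 3, 8, 8)

def init_student_mapping_from_teacher(teacher, student, freeze=True):
    # Copy the teacher's mapping network so teacher and student share the
    # same z -> w mapping, keeping their outputs semantically aligned.
    student.mapping = copy.deepcopy(teacher.mapping)
    if freeze:
        for p in student.mapping.parameters():
            p.requires_grad_(False)
    return student

teacher = ToyStyleGAN(channels=256)              # "large" teacher
student = init_student_mapping_from_teacher(teacher, ToyStyleGAN(channels=64))
z = torch.randn(4, 64)
assert torch.allclose(teacher.mapping(z), student.mapping(z))
```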

* Accepted by ECCV2022 

Towards Evaluating and Training Verifiably Robust Neural Networks

Apr 05, 2021
Zhaoyang Lyu, Minghao Guo, Tong Wu, Guodong Xu, Kehuan Zhang, Dahua Lin

Recent works have shown that interval bound propagation (IBP) can be used to train verifiably robust neural networks. Researchers have observed an intriguing phenomenon on these IBP-trained networks: CROWN, a bounding method based on tight linear relaxation, often gives very loose bounds on them. We also observe that most neurons become dead during IBP training, which can hurt the representation capability of the network. In this paper, we study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when appropriate bounding lines are chosen. We further propose a relaxed version of CROWN, linear bound propagation (LBP), which can be used to verify large networks and obtain lower verified errors than IBP. We also design a new activation function, the parameterized ramp function (ParamRamp), which offers more diverse neuron states than ReLU. We conduct extensive experiments on MNIST, CIFAR-10, and Tiny-ImageNet with the ParamRamp activation and achieve state-of-the-art verified robustness. Code and the appendix are available at https://github.com/ZhaoyangLyu/VerifiablyRobustNN.
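
For context on the bounding methods compared above, here is a minimal sketch of one interval bound propagation step through an affine layer, plus a plain (non-parameterized) ramp activation for intuition; the function names are ours, and the paper's ParamRamp is a richer, learnable variant.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """One interval-bound-propagation step through y = W x + b:
    the box center propagates exactly, the radius grows with |W|."""
    c, r = (l + u) / 2.0, (u - l) / 2.0
    yc, yr = W @ c + b, np.abs(W) @ r
    return yc - yr, yc + yr

def ramp(x, lo=0.0, hi=1.0):
    # Plain ramp: linear on [lo, hi], clipped outside. The paper's
    # ParamRamp is a learnable variant with more neuron states than this.
    return np.clip(x, lo, hi)

# Toy check: push the input box [-1, 1]^4 through one random affine layer.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
lower, upper = ibp_affine(-np.ones(4), np.ones(4), W, b)
print(ramp(lower), ramp(upper))
```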

* Accepted to CVPR 2021 (Oral) 

X-view: Non-egocentric Multi-View 3D Object Detector

Mar 24, 2021
Liang Xie, Guodong Xu, Deng Cai, Xiaofei He

3D object detection algorithms for autonomous driving reason about 3D obstacles from the bird's-eye view, the perspective view, or both. Recent works attempt to improve detection performance by mining and fusing multiple egocentric views. Although the egocentric perspective view alleviates some weaknesses of the bird's-eye view, its sectored grid partition becomes so coarse at long range that targets and their surrounding context mix together, making the features less discriminative. In this paper, we generalize research on 3D multi-view learning and propose a novel multi-view-based 3D detection method, named X-view, to overcome the drawbacks of existing multi-view methods. Specifically, X-view removes the traditional constraint that the origin of the perspective view must coincide with the origin of the 3D Cartesian coordinate system. X-view is designed as a general paradigm that can be applied to almost any LiDAR-based 3D detector, whether voxel/grid-based or raw-point-based, with only a small increase in running time. We conduct experiments on the KITTI and NuScenes datasets to demonstrate the robustness and effectiveness of the proposed X-view. The results show that X-view yields consistent improvements when combined with four mainstream state-of-the-art 3D methods: SECOND, PointRCNN, Part-A^2, and PV-RCNN.
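
A minimal sketch of the non-egocentric idea, assuming the goal is simply to form a perspective (range-view-style) parameterization of the point cloud around an arbitrary viewpoint rather than the sensor origin; X-view's actual feature projection and fusion are more involved.

```python
import numpy as np

def perspective_view_coords(points, origin):
    """Map LiDAR points (N, 3) to (azimuth, elevation, range) about an
    arbitrary viewpoint `origin`, instead of forcing the view origin to
    coincide with the sensor/Cartesian origin."""
    rel = points - origin
    x, y, z = rel[:, 0], rel[:, 1], rel[:, 2]
    dist = np.linalg.norm(rel, axis=1)
    azimuth = np.arctan2(y, x)
    elevation = np.arcsin(z / np.maximum(dist, 1e-9))
    return np.stack([azimuth, elevation, dist], axis=1)

pts = np.random.rand(1000, 3) * 50.0                    # fake point cloud
ego_view = perspective_view_coords(pts, np.zeros(3))    # egocentric view
shifted = perspective_view_coords(pts, np.array([10.0, -5.0, 0.0]))
```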

* 9 pages, 5 figures 

SparsePoint: Fully End-to-End Sparse 3D Object Detector

Mar 18, 2021
Zili Liu, Guodong Xu, Honghui Yang, Haifeng Liu, Deng Cai

Object detectors based on sparse object proposals have recently proven successful in the 2D domain, making it possible to build fully end-to-end detectors without time-consuming post-processing. This development is also attractive for 3D object detection. However, given the remarkably larger search space in the 3D domain, whether the sparse approach is feasible for 3D object detection remains an open question. In this paper, we propose SparsePoint, the first sparse method for 3D object detection. SparsePoint adopts a set of learnable proposals to encode the most likely positions of 3D objects and a foreground embedding to encode semantic features shared by all objects. In addition, with an attention module providing object-level interaction for redundant-proposal removal and the Hungarian algorithm supplying one-to-one label assignment, our method produces sparse and accurate predictions. SparsePoint sets a new state of the art on four public datasets: ScanNetV2, SUN RGB-D, S3DIS, and Matterport3D. Our code will be publicly available soon.
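
As an illustration of the one-to-one label assignment mentioned above, the snippet below runs Hungarian matching between proposals and ground-truth boxes with SciPy; the random cost matrix is a placeholder for SparsePoint's real classification-plus-regression matching cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_assignment(cost):
    # Hungarian matching: each ground-truth box (column) is assigned exactly
    # one proposal (row), which removes the need for NMS-style post-processing.
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: 6 learnable proposals vs. 3 ground-truth objects.
rng = np.random.default_rng(0)
cost = rng.random((6, 3))          # lower cost = better proposal/GT match
print(one_to_one_assignment(cost))
```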


Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup

Dec 17, 2020
Guodong Xu, Ziwei Liu, Chen Change Loy

Knowledge distillation, which extracts the "dark knowledge" from a teacher network to guide the learning of a student network, has emerged as an essential technique for model compression and transfer learning. Unlike previous works that focus on the accuracy of the student network, here we study a little-explored but important question: knowledge distillation efficiency. Our goal is to match the performance of conventional knowledge distillation at a lower training computation cost. We show that UNcertainty-aware mIXup (UNIX) serves as a clean yet effective solution. An uncertainty sampling strategy evaluates the informativeness of each training sample, and adaptive mixup is applied to uncertain samples to compact knowledge. We further show that the redundancy of conventional knowledge distillation lies in the excessive learning of easy samples. By combining uncertainty and mixup, our approach reduces this redundancy and makes better use of each query to the teacher network. We validate our approach on CIFAR100 and ImageNet. Notably, with only 79% of the computation cost, we outperform conventional knowledge distillation on CIFAR100 and achieve a comparable result on ImageNet.
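
A rough sketch of the uncertainty-plus-mixup idea, assuming the entropy of the student's prediction as the uncertainty score, a fixed mixing coefficient, and a simple pairing scheme; this is one plausible reading of the abstract, not the released UNIX implementation (see the linked repository), whose adaptive mixup adjusts the mixing per sample.

```python
import torch
import torch.nn.functional as F

def prediction_entropy(logits):
    # Entropy of the student's softmax as a simple uncertainty score.
    p = F.softmax(logits, dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)

def compact_batch(x, student_logits, lam=0.5):
    """Rank samples by student uncertainty and merge pairs with mixup so the
    teacher is queried on roughly half as many images. The fixed `lam` and
    this pairing are simplifying assumptions."""
    order = prediction_entropy(student_logits).argsort(descending=True)
    half = x.size(0) // 2
    a, b = x[order[:half]], x[order[half:2 * half]]
    return lam * a + (1.0 - lam) * b            # compacted batch for the teacher

x = torch.randn(8, 3, 32, 32)                   # fake CIFAR-sized images
logits = torch.randn(8, 100)                    # fake student predictions
teacher_batch = compact_batch(x, logits)        # shape: (4, 3, 32, 32)
```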

* The code is available at: https://github.com/xuguodong03/UNIXKD 

Knowledge Distillation Meets Self-Supervision

Jul 13, 2020
Guodong Xu, Ziwei Liu, Xiaoxiao Li, Chen Change Loy

Knowledge distillation, which extracts the "dark knowledge" from a teacher network to guide the learning of a student network, has emerged as an important technique for model compression and transfer learning. Unlike previous works that exploit architecture-specific cues such as activation and attention for distillation, here we explore a more general and model-agnostic approach for extracting "richer dark knowledge" from the pre-trained teacher model. We show that the seemingly unrelated task of self-supervision can serve as a simple yet powerful solution. For example, when performing contrastive learning between transformed entities, the noisy predictions of the teacher network reflect its intrinsic composition of semantic and pose information. By exploiting the similarity between those self-supervision signals as an auxiliary task, one can effectively transfer the hidden information from the teacher to the student. In this paper, we discuss practical ways to exploit such noisy self-supervision signals with selective transfer for distillation. We further show that self-supervision signals improve conventional distillation with substantial gains under few-shot and noisy-label scenarios. Given the richer knowledge mined from self-supervision, our knowledge distillation approach achieves state-of-the-art performance on standard benchmarks, i.e., CIFAR100 and ImageNet, under both similar-architecture and cross-architecture settings. The advantage is even more pronounced in the cross-architecture setting, where our method outperforms the state-of-the-art method CRD by an average of 2.3% accuracy on CIFAR100 across six different teacher-student pairs.
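
The similarity-transfer idea can be sketched as matching the teacher's and student's pairwise similarity structure over transformed samples, as below; SSKD's full method also includes a contrastive task head and selective transfer, and the function names and temperature here are assumptions.

```python
import torch
import torch.nn.functional as F

def similarity_matrix(feats, tau=0.5):
    # Cosine similarities between all (augmented) samples in the batch,
    # converted to row-wise distributions with a temperature.
    z = F.normalize(feats, dim=1)
    return F.softmax(z @ z.t() / tau, dim=1)

def relation_transfer_loss(teacher_feats, student_feats, tau=0.5):
    # Train the student to reproduce the teacher's similarity structure
    # over transformed samples (KL between the two row distributions).
    t = similarity_matrix(teacher_feats, tau).detach()
    s = similarity_matrix(student_feats, tau)
    return F.kl_div(s.clamp_min(1e-12).log(), t, reduction="batchmean")

teacher_feats = torch.randn(16, 512)            # features of augmented views
student_feats = torch.randn(16, 128)
loss = relation_transfer_loss(teacher_feats, student_feats)
```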

* To appear in ECCV 2020. Code is available at: https://github.com/xuguodong03/SSKD 

A Local-to-Global Approach to Multi-modal Movie Scene Segmentation

Apr 28, 2020
Anyi Rao, Linning Xu, Yu Xiong, Guodong Xu, Qingqiu Huang, Bolei Zhou, Dahua Lin

The scene, as the crucial unit of storytelling in movies, contains complex activities of actors and their interactions in a physical environment. Identifying the composition of scenes is a critical step towards the semantic understanding of movies. This is very challenging: compared to the videos studied in conventional vision problems, e.g., action recognition, scenes in movies usually contain much richer temporal structures and more complex semantic information. Towards this goal, we scale up the scene segmentation task by building MovieScenes, a large-scale video dataset containing 21K annotated scene segments from 150 movies. We further propose a local-to-global scene segmentation framework that integrates multi-modal information across three levels, i.e., clip, segment, and movie. This framework is able to distill complex semantics from hierarchical temporal structures over a long movie, providing top-down guidance for scene segmentation. Our experiments show that the proposed network segments a movie into scenes with high accuracy, consistently outperforming previous methods. We also find that pretraining on MovieScenes brings significant improvements to existing approaches.
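
A toy sketch of the local-to-global idea, assuming clip-level features are contextualized over a local window and the whole movie before each position is classified as a scene boundary; the layer choices and class name are illustrative assumptions, not the authors' multi-modal model.

```python
import torch
import torch.nn as nn

class LocalToGlobalBoundary(nn.Module):
    """Toy boundary head: clip features get local (segment-level) context via
    a 1D convolution and global (movie-level) context via mean pooling before
    each position is classified as scene boundary or not."""
    def __init__(self, dim=128, window=9):
        super().__init__()
        self.local = nn.Conv1d(dim, dim, kernel_size=window, padding=window // 2)
        self.head = nn.Linear(2 * dim, 2)        # boundary yes/no per clip

    def forward(self, clip_feats):               # clip_feats: (T, dim)
        local = self.local(clip_feats.t().unsqueeze(0)).squeeze(0).t()
        global_ctx = clip_feats.mean(dim=0, keepdim=True).expand_as(local)
        return self.head(torch.cat([local, global_ctx], dim=1))   # (T, 2)

clip_feats = torch.randn(40, 128)                # 40 clips from one movie
boundary_logits = LocalToGlobalBoundary()(clip_feats)
```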

* CVPR2020. Project page: https://anyirao.com/projects/SceneSeg.html 