Changlin Li

Dynamic Slimmable Denoising Network


Oct 17, 2021
Zutao Jiang, Changlin Li, Xiaojun Chang, Jihua Zhu, Yi Yang

* 11 pages 


DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers


Sep 21, 2021
Changlin Li, Guangrun Wang, Bing Wang, Xiaodan Liang, Zhihui Li, Xiaojun Chang

* Extension of the CVPR 2021 oral paper (https://openaccess.thecvf.com/content/CVPR2021/html/Li_Dynamic_Slimmable_Network_CVPR_2021_paper.html)


Pi-NAS: Improving Neural Architecture Search by Reducing Supernet Training Consistency Shift


Aug 22, 2021
Jiefeng Peng, Jiqi Zhang, Changlin Li, Guangrun Wang, Xiaodan Liang, Liang Lin

* Accepted to ICCV 2021 


Joint Depth and Normal Estimation from Real-world Time-of-flight Raw Data


Aug 08, 2021
Rongrong Gao, Na Fan, Changlin Li, Wentao Liu, Qifeng Chen

* IROS 2021 


BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search


Mar 24, 2021
Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, Xiaojun Chang

* Our searched hybrid CNN-transformer models achieve up to 82.2% top-1 accuracy on ImageNet, surpassing EfficientNet by 2.1% with comparable compute time


Dynamic Slimmable Network


Mar 24, 2021
Changlin Li, Guangrun Wang, Bing Wang, Xiaodan Liang, Zhihui Li, Xiaojun Chang

* Accepted to CVPR 2021 as an Oral Presentation 


Density Map Guided Object Detection in Aerial Images


Apr 12, 2020
Changlin Li, Taojiannan Yang, Sijie Zhu, Chen Chen, Shanyue Guan

* Accepted to the CVPR 2020 EarthVision Workshop


Blockwisely Supervised Neural Architecture Search with Knowledge Distillation


Nov 29, 2019
Changlin Li, Jiefeng Peng, Liuchun Yuan, Guangrun Wang, Xiaodan Liang, Liang Lin, Xiaojun Chang

* We achieve a state-of-the-art 78.4% top-1 accuracy on ImageNet in a mobile setting, about a 2.1% gain over EfficientNet-B0
