Xingjian Li

Robust Cross-Modal Knowledge Distillation for Unconstrained Videos

Apr 27, 2023
Wenke Xia, Xingjian Li, Andong Deng, Haoyi Xiong, Dejing Dou, Di Hu

Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources

Jul 14, 2022
Ji Liu, Daxiang Dong, Xi Wang, An Qin, Xingjian Li, Patrick Valduriez, Dejing Dou, Dianhai Yu

Fine-tuning Pre-trained Language Models with Noise Stability Regularization

Jun 12, 2022
Hang Hua, Xingjian Li, Dejing Dou, Cheng-Zhong Xu, Jiebo Luo

Deep Active Learning with Noise Stability

May 26, 2022
Xingjian Li, Pengkun Yang, Tianyang Wang, Xueying Zhan, Min Xu, Dejing Dou, Chengzhong Xu

Inadequately Pre-trained Models are Better Feature Extractors

Mar 09, 2022
Andong Deng, Xingjian Li, Zhibing Li, Di Hu, Chengzhong Xu, Dejing Dou

Boosting Active Learning via Improving Test Performance

Dec 10, 2021
Tianyang Wang, Xingjian Li, Pengkun Yang, Guosheng Hu, Xiangrui Zeng, Siyu Huang, Cheng-Zhong Xu, Min Xu

Noise Stability Regularization for Improving BERT Fine-tuning

Jul 10, 2021
Hang Hua, Xingjian Li, Dejing Dou, Cheng-Zhong Xu, Jiebo Luo

SMILE: Self-Distilled MIxup for Efficient Transfer LEarning

Mar 25, 2021
Xingjian Li, Haoyi Xiong, Chengzhong Xu, Dejing Dou

Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond

Mar 19, 2021
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou