
Xingjian Li

Robust Cross-Modal Knowledge Distillation for Unconstrained Videos

Apr 27, 2023

Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources

Jul 14, 2022

Fine-tuning Pre-trained Language Models with Noise Stability Regularization

Jun 12, 2022

Deep Active Learning with Noise Stability

May 26, 2022

Inadequately Pre-trained Models are Better Feature Extractors

Mar 09, 2022

Boosting Active Learning via Improving Test Performance

Dec 10, 2021

Noise Stability Regularization for Improving BERT Fine-tuning

Jul 10, 2021

SMILE: Self-Distilled MIxup for Efficient Transfer LEarning

Mar 25, 2021

Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond

Mar 19, 2021

Adaptive Consistency Regularization for Semi-Supervised Transfer Learning

Mar 03, 2021