Xingjian Li

Vox-UDA: Voxel-wise Unsupervised Domain Adaptation for Cryo-Electron Subtomogram Segmentation with Denoised Pseudo Labeling

Jun 25, 2024

Photorealistic Robotic Simulation using Unreal Engine 5 for Agricultural Applications

May 28, 2024

Robust Cross-Modal Knowledge Distillation for Unconstrained Videos

Apr 27, 2023

Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources

Jul 14, 2022

Fine-tuning Pre-trained Language Models with Noise Stability Regularization

Jun 12, 2022

Deep Active Learning with Noise Stability

May 26, 2022

Inadequately Pre-trained Models are Better Feature Extractors

Mar 09, 2022

Boosting Active Learning via Improving Test Performance

Dec 10, 2021

Noise Stability Regularization for Improving BERT Fine-tuning

Jul 10, 2021

SMILE: Self-Distilled MIxup for Efficient Transfer LEarning

Mar 25, 2021