Qiao Xiao

Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness

Oct 03, 2024

Are Sparse Neural Networks Better Hard Sample Learners?

Sep 13, 2024

Nerva: A Truly Sparse Implementation of Neural Networks

Jul 24, 2024

Dynamic Data Pruning for Automatic Speech Recognition

Jun 26, 2024

MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization

Jun 25, 2024

A Unified Framework for Unsupervised Domain Adaptation based on Instance Weighting

Dec 08, 2023

E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation

Dec 07, 2023

Dynamic Sparse Network for Time Series Classification: Learning What to "See"

Dec 19, 2022

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity

Jul 07, 2022

Multi-Objective Meta-Learning

Feb 14, 2021