
Minghai Qin

Data Overfitting for On-Device Super-Resolution with Dynamic Algorithm and Compiler Co-Design

Jul 03, 2024

Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting

Mar 15, 2023

DISCO: Distributed Inference with Sparse Communications

Feb 22, 2023

All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management

Dec 09, 2022

Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors

Nov 22, 2022

Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training

Nov 19, 2022

The Lottery Ticket Hypothesis for Vision Transformers

Nov 02, 2022

Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution

Jul 25, 2022

CHEX: CHannel EXploration for CNN Model Compression

Mar 29, 2022

SPViT: Enabling Faster Vision Transformers via Soft Token Pruning

Dec 27, 2021