Xue Geng

LPViT: Low-Power Semi-structured Pruning for Vision Transformers

Jul 02, 2024

DM3D: Distortion-Minimized Weight Pruning for Lossless 3D Object Detection

Jul 02, 2024

From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks

May 09, 2024

Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks

Aug 24, 2023

CRAFT: Cross-Attentional Flow Transformer for Robust Optical Flow

Mar 31, 2022

Role-Wise Data Augmentation for Knowledge Distillation

Apr 19, 2020

Dataflow-based Joint Quantization of Weights and Activations for Deep Neural Networks

Jan 04, 2019