Dacheng Tao

DearKD: Data-Efficient Early Knowledge Distillation for Vision Transformers

Apr 28, 2022
Xianing Chen, Qiong Cao, Yujie Zhong, Jing Zhang, Shenghua Gao, Dacheng Tao

ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation

Apr 26, 2022
Yufei Xu, Jing Zhang, Qiming Zhang, Dacheng Tao

Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring

Apr 26, 2022
Youjian Zhang, Chaoyue Wang, Dacheng Tao

BLISS: Robust Sequence-to-Sequence Learning via Self-Supervised Input Representation

Apr 24, 2022
Zheng Zhang, Liang Ding, Dazhao Cheng, Xuebo Liu, Min Zhang, Dacheng Tao

Source-Free Domain Adaptation via Distribution Estimation

Apr 24, 2022
Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, Dacheng Tao

A Model-Agnostic Data Manipulation Method for Persona-based Dialogue Generation

Apr 21, 2022
Yu Cao, Wei Bi, Meng Fang, Shuming Shi, Dacheng Tao

Neural Collapse Inspired Attraction-Repulsion-Balanced Loss for Imbalanced Learning

Apr 19, 2022
Liang Xie, Yibo Yang, Deng Cai, Dacheng Tao, Xiaofei He

VSA: Learning Varied-Size Window Attention in Vision Transformers

Apr 18, 2022
Qiming Zhang, Yufei Xu, Jing Zhang, Dacheng Tao

Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation

Apr 16, 2022
Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao
