Jianxin Wu

Dense Vision Transformer Compression with Few Samples

Mar 27, 2024
Hanxiao Zhang, Yifan Zhou, Guo-Hua Wang, Jianxin Wu

DiffuLT: How to Make Diffusion Model Useful for Long-tail Recognition

Mar 08, 2024
Jie Shao, Ke Zhu, Hanxiao Zhang, Jianxin Wu

Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning

Feb 06, 2024
Ningyuan Tang, Minghao Fu, Ke Zhu, Jianxin Wu

Rectify the Regression Bias in Long-Tailed Object Detection

Jan 31, 2024
Ke Zhu, Minghao Fu, Jie Shao, Tianyu Liu, Jianxin Wu

Reviving Undersampling for Long-Tailed Learning

Jan 30, 2024
Hao Yu, Yingxiao Du, Jianxin Wu

DTL: Disentangled Transfer Learning for Visual Recognition

Dec 13, 2023
Minghao Fu, Ke Zhu, Jianxin Wu

Multi-Label Self-Supervised Learning with Scene Images

Aug 08, 2023
Ke Zhu, Minghao Fu, Jianxin Wu

Quantized Feature Distillation for Network Quantization

Jul 20, 2023
Ke Zhu, Yin-Yin He, Jianxin Wu

Coarse Is Better? A New Pipeline Towards Self-Supervised Learning with Uncurated Images

Jun 08, 2023
Ke Zhu, Yin-Yin He, Jianxin Wu
