Zhi-Hong Deng

Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning

May 09, 2024
Shibo Jie, Yehui Tang, Ning Ding, Zhi-Hong Deng, Kai Han, Yunhe Wang


Empowering Large Language Model Agents through Action Learning

Feb 24, 2024
Haiteng Zhao, Chang Ma, Guoyin Wang, Jing Su, Lingpeng Kong, Jingjing Xu, Zhi-Hong Deng, Hongxia Yang


Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy

Jul 31, 2023
Shibo Jie, Haoqing Wang, Zhi-Hong Deng


Dual-Alignment Pre-training for Cross-lingual Sentence Embedding

May 16, 2023
Ziheng Li, Shaohan Huang, Zihan Zhang, Zhi-Hong Deng, Qiang Lou, Haizhen Huang, Jian Jiao, Furu Wei, Weiwei Deng, Qi Zhang


Masked Image Modeling with Local Multi-Scale Reconstruction

Mar 09, 2023
Haoqing Wang, Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhi-Hong Deng, Kai Han


Are More Layers Beneficial to Graph Transformers?

Mar 01, 2023
Haiteng Zhao, Shuming Ma, Dongdong Zhang, Zhi-Hong Deng, Furu Wei


Detachedly Learn a Classifier for Class-Incremental Learning

Feb 23, 2023
Ziheng Li, Shibo Jie, Zhi-Hong Deng


FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer

Dec 06, 2022
Shibo Jie, Zhi-Hong Deng


Convolutional Bypasses Are Better Vision Transformer Adapters

Jul 18, 2022
Shibo Jie, Zhi-Hong Deng


Certified Robustness Against Natural Language Attacks by Causal Intervention

May 26, 2022
Haiteng Zhao, Chang Ma, Xinshuai Dong, Anh Tuan Luu, Zhi-Hong Deng, Hanwang Zhang
