
Zhi-Hong Deng

Empowering Large Language Model Agents through Action Learning

Feb 24, 2024

Revisiting the Parameter Efficiency of Adapters from the Perspective of Precision Redundancy

Jul 31, 2023

Dual-Alignment Pre-training for Cross-lingual Sentence Embedding

May 16, 2023

Masked Image Modeling with Local Multi-Scale Reconstruction

Mar 09, 2023

Are More Layers Beneficial to Graph Transformers?

Mar 01, 2023

Detachedly Learn a Classifier for Class-Incremental Learning

Feb 23, 2023

FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer

Dec 06, 2022

Convolutional Bypasses Are Better Vision Transformer Adapters

Jul 18, 2022

Certified Robustness Against Natural Language Attacks by Causal Intervention

May 26, 2022

Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework

May 19, 2022