Hongyu Wang

Temporal Adaptive RGBT Tracking with Modality Prompt

Jan 02, 2024

BitNet: Scaling 1-bit Transformers for Large Language Models

Oct 17, 2023

PREFER: Prompt Ensemble Learning via Feedback-Reflect-Refine

Aug 23, 2023

Hyperspectral Target Detection Based on Low-Rank Background Subspace Learning and Graph Laplacian Regularization

Jun 01, 2023

The state-of-the-art 3D anisotropic intracranial hemorrhage segmentation on non-contrast head CT: The INSTANCE challenge

Jan 12, 2023

TorchScale: Transformers at Scale

Nov 23, 2022

Foundation Transformers

Oct 19, 2022

End-User Puppeteering of Expressive Movements

Jul 25, 2022

Cross-domain Few-shot Meta-learning Using Stacking

May 12, 2022

DeepNet: Scaling Transformers to 1,000 Layers

Mar 01, 2022