Zhenxiang Xiao

Ophtha-LLaMA2: A Large Language Model for Ophthalmology

Dec 08, 2023

Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT

Apr 29, 2023

Coupling Visual Semantics of Artificial Neural Networks and Human Brain Function via Synchronized Activations

Jun 22, 2022

Eye-gaze-guided Vision Transformer for Rectifying Shortcut Learning

May 25, 2022

Mask-guided Vision Transformer for Few-Shot Learning

May 20, 2022