Ruixiang Tang

TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration

May 21, 2025

CAMA: Enhancing Multimodal In-Context Learning with Context-Aware Modulated Attention

May 21, 2025

M2IV: Towards Efficient and Fine-grained Multimodal In-Context Learning in Large Vision-Language Models

Apr 06, 2025

EAZY: Eliminating Hallucinations in LVLMs by Zeroing out Hallucinatory Image Tokens

Mar 10, 2025

DBR: Divergence-Based Regularization for Debiasing Natural Language Understanding Models

Feb 25, 2025

Can Large Vision-Language Models Detect Images Copyright Infringement from GenAI?

Feb 23, 2025

Massive Values in Self-Attention Modules are the Key to Contextual Knowledge Understanding

Feb 03, 2025

Survey and Improvement Strategies for Gene Prioritization with Large Language Models

Jan 30, 2025

Decoding Knowledge in Large Language Models: A Framework for Categorization and Comprehension

Jan 02, 2025

Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics

Nov 22, 2024