Yuanbin Wu

On Support Samples of Next Word Prediction

Jun 09, 2025

Protein Design with Dynamic Protein Vocabulary

May 25, 2025

PDFBench: A Benchmark for De novo Protein Design from Function

May 25, 2025

The Role of Visual Modality in Multimodal Mathematical Reasoning: Challenges and Insights

Mar 06, 2025

Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs

Feb 20, 2025

EvoLlama: Enhancing LLMs' Understanding of Proteins via Multimodal Structure and Sequence Representations

Dec 16, 2024

AntLM: Bridging Causal and Masked Language Models

Dec 04, 2024

Generation with Dynamic Vocabulary

Oct 11, 2024

Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models

Oct 04, 2024

CERD: A Comprehensive Chinese Rhetoric Dataset for Rhetorical Understanding and Generation in Essays

Sep 29, 2024