Xinting Huang

UNCLE: Uncertainty Expressions in Long-Form Generation
May 22, 2025

Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought
May 21, 2025

Low-hallucination Synthetic Captions for Large-Scale Vision-Language Model Pre-training
Apr 17, 2025

Contextualize-then-Aggregate: Circuits for In-Context Learning in Gemma-2 2B
Mar 31, 2025

Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
Feb 04, 2025

A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression
Dec 23, 2024

Attention Entropy is a Key Factor: An Analysis of Parallel Context Encoding with Full-attention-based Pre-trained Language Models
Dec 21, 2024

LoGU: Long-form Generation with Uncertainty Expressions
Oct 18, 2024

Atomic Calibration of LLMs in Long-Form Generations
Oct 17, 2024

Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability
Oct 15, 2024