Bin Yu

LangForce: Bayesian Decomposition of Vision Language Action Models via Latent Action Queries
Jan 27, 2026

BayesianVLA: Bayesian Decomposition of Vision Language Action Models via Latent Action Queries
Jan 21, 2026

TwinBrainVLA: Unleashing the Potential of Generalist VLMs for Embodied Tasks via Asymmetric Mixture-of-Transformers
Jan 20, 2026

LLMBoost: Make Large Language Models Stronger with Boosting
Dec 26, 2025

PhysBrain: Human Egocentric Data as a Bridge from Vision Language Models to Physical Intelligence
Dec 18, 2025

PLoP: Precise LoRA Placement for Efficient Finetuning of Large Models
Jun 25, 2025

Local MDI+: Local Feature Importances for Tree-Based Models
Jun 10, 2025

CDR-Agent: Intelligent Selection and Execution of Clinical Decision Rules Using Large Language Model Agents
May 29, 2025

ProxySPEX: Inference-Efficient Interpretability via Sparse Feature Interactions in LLMs
May 23, 2025

Not All Tokens Are What You Need In Thinking
May 23, 2025