
Yixin Chen

Round-trip Reinforcement Learning: Self-Consistent Training for Better Chemical LLMs

Oct 01, 2025

Addressing accuracy and hallucination of LLMs in Alzheimer's disease research through knowledge graphs

Aug 28, 2025

GWM: Towards Scalable Gaussian World Models for Robotic Manipulation

Aug 25, 2025

Beyond Semantic Similarity: Reducing Unnecessary API Calls via Behavior-Aligned Retriever

Aug 20, 2025

A Comprehensive Evaluation Framework of Alignment Techniques for LLMs

Aug 13, 2025

Spatial-Temporal Multi-Scale Quantization for Flexible Motion Generation

Aug 12, 2025

LEO-VL: Towards 3D Vision-Language Generalists via Data Scaling with Efficient Representation

Jun 11, 2025

InteractAnything: Zero-shot Human Object Interaction Synthesis via LLM Feedback and Object Affordance Parsing

May 30, 2025

Visual Instruction Tuning with Chain of Region-of-Interest

May 11, 2025

MetaScenes: Towards Automated Replica Creation for Real-world 3D Scans

May 05, 2025