Boyi Liu

OpenNavMap: Structure-Free Topometric Mapping via Large-Scale Collaborative Localization

Jan 18, 2026

CAFEDistill: Learning Personalized and Dynamic Models through Federated Early-Exit Network Distillation

Jan 15, 2026

The Starlink Robot: A Platform and Dataset for Mobile Satellite Communication

Jun 24, 2025

Graph-Reward-SQL: Execution-Free Reinforcement Learning for Text-to-SQL via Graph Matching and Stepwise Reward

May 18, 2025

Follow Everything: A Leader-Following and Obstacle Avoidance Framework with Goal-Aware Adaptation

May 01, 2025

NavG: Risk-Aware Navigation in Crowded Environments Based on Reinforcement Learning with Guidance Points

Mar 03, 2025

BRiTE: Bootstrapping Reinforced Thinking Process to Enhance Language Model Reasoning

Jan 31, 2025

Seed-CTS: Unleashing the Power of Tree Search for Superior Performance in Competitive Coding Tasks

Dec 17, 2024

DSTC: Direct Preference Learning with Only Self-Generated Tests and Code to Improve Code LMs

Nov 20, 2024

Reward-Augmented Data Enhances Direct Preference Alignment of LLMs

Oct 10, 2024