Shanghai Jiaotong University
Abstract: AI agents deployed as persistent assistants must maintain correct beliefs as their information environment evolves. In practice, evidence is scattered across heterogeneous sources that often contradict one another, new information can invalidate earlier conclusions, and user preferences surface through corrections rather than explicit instructions. Existing benchmarks largely assume static, single-authority settings and do not evaluate whether agents can keep up with this complexity. We introduce ClawArena, a benchmark for evaluating AI agents in evolving information environments. Each scenario maintains a complete hidden ground truth while exposing the agent only to noisy, partial, and sometimes contradictory traces across multi-channel sessions, workspace files, and staged updates. Evaluation is organized around three coupled challenges: multi-source conflict reasoning, dynamic belief revision, and implicit personalization, whose interactions yield a 14-category question taxonomy. Two question formats, multiple-choice (set-selection) and shell-based executable checks, test both reasoning and workspace grounding. The current release contains 64 scenarios across 8 professional domains, totaling 1,879 evaluation rounds and 365 dynamic updates. Experiments on five agent frameworks and five language models show that both model capability (a 15.4% range) and framework design (a 9.2% range) substantially affect performance, that self-evolving skill frameworks can partially close model-capability gaps, and that belief-revision difficulty is determined by the update design strategy rather than by the mere presence of updates. Code is available at https://github.com/aiming-lab/ClawArena.
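
A minimal sketch of the two question formats named in the abstract, set-selection scoring and shell-based executable checks. The field names (gold_set, check_cmd, expected_output), the exact-match scoring rule, and the output comparison are assumptions for illustration; the benchmark's actual schema and grading logic are not specified in the abstract.

import subprocess

def score_set_selection(predicted: set[str], gold_set: set[str]) -> bool:
    # Set-selection questions: assume credit only when the chosen option set
    # matches the hidden gold set exactly.
    return predicted == gold_set

def score_shell_check(workspace: str, check_cmd: str, expected_output: str) -> bool:
    # Shell-based executable checks: run a verification command inside the
    # agent's workspace and compare its stdout against the expected answer.
    result = subprocess.run(check_cmd, shell=True, cwd=workspace,
                            capture_output=True, text=True, timeout=60)
    return result.stdout.strip() == expected_output.strip()
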
Abstract: Large language model (LLM) agents are increasingly used for complex tasks, yet deployed agents often remain static, failing to adapt as user needs evolve. This creates a tension between the need for continuous service and the necessity of updating capabilities to match shifting task distributions. On platforms like OpenClaw, which handle diverse workloads across 20+ channels, existing methods either store raw trajectories without distilling knowledge, maintain static skill libraries, or require disruptive downtime for retraining. We present MetaClaw, a continual meta-learning framework that jointly evolves a base LLM policy and a library of reusable behavioral skills. MetaClaw employs two complementary mechanisms. Skill-driven fast adaptation analyzes failure trajectories via an LLM evolver to synthesize new skills, enabling immediate improvement with zero downtime. Opportunistic policy optimization performs gradient-based updates via cloud LoRA fine-tuning and Reinforcement Learning with a Process Reward Model (RL-PRM); it is triggered during user-inactive windows by the Opportunistic Meta-Learning Scheduler (OMLS), which monitors system inactivity and calendar data. These mechanisms are mutually reinforcing: a refined policy generates better trajectories for skill synthesis, while richer skills provide higher-quality data for policy optimization. To prevent data contamination, a versioning mechanism separates support and query data. Built on a proxy-based architecture, MetaClaw scales to production-size LLMs without local GPUs. Experiments on MetaClaw-Bench and AutoResearchClaw show that skill-driven adaptation improves accuracy by up to 32% relative. The full pipeline advances Kimi-K2.5 accuracy from 21.4% to 40.6% and increases composite robustness by 18.3%. Code is available at https://github.com/aiming-lab/MetaClaw.
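
A minimal sketch of the OMLS trigger condition described above: gradient-based policy updates launch only when the system has been idle long enough and no calendar event is imminent. The idle threshold, the calendar buffer, and the placeholder launch step are assumptions; the paper's actual scheduler is not detailed in the abstract.

import time
from datetime import datetime, timedelta

IDLE_THRESHOLD_S = 30 * 60             # assumed: 30 min without user activity
CALENDAR_BUFFER = timedelta(hours=1)   # assumed: skip if an event starts within 1 h

def should_trigger_update(last_activity_ts: float, next_event: datetime | None) -> bool:
    # Trigger opportunistic optimization only inside a user-inactive window.
    idle_long_enough = (time.time() - last_activity_ts) >= IDLE_THRESHOLD_S
    calendar_clear = next_event is None or (next_event - datetime.now()) > CALENDAR_BUFFER
    return idle_long_enough and calendar_clear

if should_trigger_update(last_activity_ts=time.time() - 3600, next_event=None):
    # Placeholder for the cloud LoRA + RL-PRM update step (not shown here).
    print("user-inactive window detected: launching opportunistic policy update")
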




Abstract: Obtaining reliable position estimates is fundamental for unmanned aerial vehicles during mission execution, especially in harsh environments. However, environmental interference and abrupt changes usually degrade measurement reliability, leading to estimation divergence. To address this, existing works explore adaptive adjustment of sensor confidence; unfortunately, these methods typically lack synchronous evaluation of estimation precision, which makes the adjustments sensitive to abnormal data and prone to divergence. To tackle this issue, we propose a novel ternary-channel adaptive evolving estimator equipped with an online error monitor, in which the three channels (states, noise covariance matrices, and especially aerial drag) evolve simultaneously with the environment. First, an augmented filter pre-processes multidimensional data, followed by an inverse-Wishart smoother that yields posterior states and covariance matrices. The error-propagation relation during estimation is analyzed, and an indicator is devised for online monitoring of estimation errors. On this basis, several restrictions are applied to suppress potential divergence caused by interference. Additionally, considering motion dynamics, the aerial drag matrix is reformulated from the updated states and covariance matrices. Finally, the observability, numerical sensitivity, and arithmetic complexity of the proposed estimator are analyzed mathematically. Extensive experiments in both common and harsh environments (with average RMSE of 0.17 m and 0.39 m, respectively) verify the adaptability of the algorithm and the effectiveness of the restriction design, and show that our method significantly outperforms the state of the art.
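
An illustrative sketch, not the paper's estimator: one adaptive Kalman-style update in which the measurement-noise covariance R is refreshed from an inverse-Wishart-style posterior and an innovation-based indicator gates updates to suppress divergence from abnormal data. The forgetting factor rho, the chi-square gate, and the use of the posterior mode for R are assumptions made for the sketch.

import numpy as np

def adaptive_update(x, P, z, H, R, nu, Psi, rho=0.98, gate=9.21):
    # Innovation and its covariance.
    y = z - H @ x
    S = H @ P @ H.T + R
    # Online error indicator: normalized innovation squared; if it exceeds the
    # gate, skip this measurement to suppress divergence led by interference.
    nis = float(y @ np.linalg.solve(S, y))
    if nis > gate:
        return x, P, R, nu, Psi
    # Standard Kalman correction.
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    # Inverse-Wishart-style refresh of R from the accumulated innovation statistic.
    nu = rho * nu + 1.0
    Psi = rho * Psi + np.outer(y, y)
    R = Psi / (nu + len(z) + 1.0)   # mode of an inverse-Wishart posterior
    return x, P, R, nu, Psi
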