Abstract: Systems based on Large Language Models (LLMs) have become formidable tools for automating research and software production. However, governing them remains a challenge when technical requirements demand strict consistency, auditability, and predictable control over cost and latency. Recent literature highlights two phenomena that compound this challenge: the stochastic variance inherent in the model's judgment (often treated as "systemic noise") and the substantial degradation of context utilization over long inputs, with critical losses when decisive information is buried in the middle of the prompt. This article proposes PARCER as an engineering response to these limitations. The framework acts as a declarative "operational contract" in YAML, transforming unstructured interactions into versioned, executable artifacts. PARCER imposes strict governance structured into seven operational phases, introducing decision-hygiene practices inspired by legal judgments to mitigate noise, adaptive token budgeting, formalized recovery routes (fallbacks) for context preservation, and systemic observability via OpenTelemetry. This work presents the conceptual and technical architecture of PARCER, positioning it as a necessary transition from simple "prompt engineering" to governable "context engineering".
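To make the notion of a declarative "operational contract" concrete, a PARCER contract could be sketched along the following lines. This is an illustrative sketch only: the abstract names the concepts (seven phases, adaptive token budgeting, fallbacks, OpenTelemetry observability), but every field name and value below is an assumption, not the framework's actual schema.

```yaml
# Hypothetical sketch of a PARCER-style operational contract.
# All keys and values are illustrative assumptions.
contract_version: "1.0"
phases: 7                  # the seven operational phases, executed in order
token_budget:
  max_input_tokens: 8000
  adaptive: true           # compress context before the budget is exceeded
fallbacks:
  on_context_overflow: summarize_middle   # protect decisive mid-prompt content
  on_invalid_output: retry_with_schema
observability:
  exporter: opentelemetry
  trace_per_phase: true
```

Treating the contract as a versioned YAML artifact is what allows it to be reviewed, diffed, and audited like any other piece of configuration.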
Abstract: Autonomous agents based on Large Language Models (LLMs) have evolved from reactive assistants into systems capable of planning, executing actions via tools, and iterating over environment observations. However, they remain vulnerable to structural limitations: the lack of native state, context degradation over long horizons, and the gap between probabilistic generation and deterministic execution requirements. This paper presents the ESAA (Event Sourcing for Autonomous Agents) architecture, which, inspired by the Event Sourcing pattern, separates the agent's cognitive intention from the mutation of project state. In ESAA, agents emit only structured intentions as validated JSON (agent.result or issue.report); a deterministic orchestrator validates them, persists events in an append-only log (activity.jsonl), applies file-writing effects, and projects a verifiable materialized view (roadmap.json). The proposal incorporates boundary contracts (AGENT_CONTRACT.yaml), metaprompting profiles (PARCER), and replay verification with hashing (esaa verify), ensuring the immutability of completed tasks and forensic traceability. Two case studies validate the architecture: (i) a landing-page project (9 tasks, 49 events, single-agent composition) and (ii) a clinical dashboard system (50 tasks, 86 events, 4 concurrent agents across 8 phases), both concluding with run.status=success and verify_status=ok. The multi-agent case study demonstrates real concurrent orchestration with heterogeneous LLMs (Claude Sonnet 4.6, Codex GPT-5, Antigravity/Gemini 3 Pro, and Claude Opus 4.6), providing empirical evidence that the architecture scales beyond single-agent scenarios.
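The validate-append-project-verify loop described above can be sketched in a few dozen lines. This is a minimal sketch, not ESAA's implementation: the intention types (agent.result, issue.report), file names (activity.jsonl, roadmap.json), and verify_status field come from the abstract, while the event payload fields (task_id) and projection rules are illustrative assumptions.

```python
import hashlib
import json
from pathlib import Path

# Minimal sketch of the ESAA loop: an agent emits a structured intention,
# a deterministic orchestrator validates it, appends it to an append-only
# log, projects a materialized view, and replay-verifies the projection.
LOG = Path("activity.jsonl")    # append-only event log
VIEW = Path("roadmap.json")     # materialized view projected from the log

def append_event(event: dict) -> None:
    """Validate the intention's type and persist it to the append-only log."""
    if event.get("type") not in ("agent.result", "issue.report"):
        raise ValueError(f"unknown intention type: {event.get('type')}")
    with LOG.open("a") as f:
        f.write(json.dumps(event, sort_keys=True) + "\n")

def project() -> dict:
    """Deterministically fold the event log into the materialized view."""
    view = {"tasks": {}}
    for line in LOG.read_text().splitlines():
        e = json.loads(line)
        if e["type"] == "agent.result":
            view["tasks"][e["task_id"]] = "done"       # completion wins
        elif e["type"] == "issue.report":
            view["tasks"].setdefault(e["task_id"], "blocked")
    VIEW.write_text(json.dumps(view, sort_keys=True))
    return view

def verify() -> dict:
    """Hash the log, replay it, and compare against the stored view
    (analogous in spirit to `esaa verify`)."""
    digest = hashlib.sha256(LOG.read_bytes()).hexdigest()
    stored = json.loads(VIEW.read_text())
    replayed = project()  # re-projection is deterministic by construction
    status = "ok" if replayed == stored else "mismatch"
    return {"verify_status": status, "log_sha256": digest}
```

Because agents never touch roadmap.json directly, the view can always be discarded and rebuilt from activity.jsonl, which is what makes the replay check and forensic traceability possible.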