Abstract: TopoEdge is a topology-grounded, edge-deployable framework for end-to-end software-defined networking (SDN) configuration generation and repair, motivated by the brittleness of configuration artefacts under topology variation and by strict operational constraints on latency, privacy, and on-site execution. TopoEdge represents each target topology as a router-level graph and embeds it using a contrastively trained graph neural network (GNN), enabling nearest-neighbour retrieval of a verified reference configuration paired with an executable Python driver (a Topotest/pytest test script that orchestrates the emulated network and checks protocol assertions). The target topology, retrieved reference topology, and reference driver are assembled into a topology-grounded retrieval-augmented generation context (TopoRAG), which grounds a distributed, execution-centric generate--verify--repair loop coordinated by a central controller and realised by three role-specialised agents: (i) a Planning agent that produces a topology-consistent configuration plan and a per-device skeleton; (ii) a Generation agent that materialises executable configuration artefacts, including device configurations and the driver; and (iii) a Verification agent that runs the FRRouting Topotest/pytest harness, compresses failures into a compact trace, and emits localised patch directives for iterative repair.
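The retrieval step described above can be sketched in a few lines. This is an illustrative approximation only: the function and variable names (`retrieve_reference`, `cosine_sim`, the toy library entries) are hypothetical, and the topology embeddings are assumed to have been produced elsewhere by the contrastively trained GNN.

```python
# Hypothetical sketch of TopoEdge's nearest-neighbour retrieval: given an
# embedding of the target topology, find the most similar verified reference
# configuration by cosine similarity over a library of precomputed embeddings.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_reference(target_emb: np.ndarray, library) -> str:
    """library: list of (name, embedding) pairs for verified reference
    topologies; returns the name of the nearest neighbour."""
    return max(library, key=lambda item: cosine_sim(target_emb, item[1]))[0]

# Toy usage with two made-up reference topologies.
lib = [("ring4", np.array([1.0, 0.0])), ("star5", np.array([0.0, 1.0]))]
print(retrieve_reference(np.array([0.9, 0.1]), lib))  # prints "ring4"
```

In the full pipeline the retrieved name would index both the reference configuration and its paired Topotest/pytest driver, which together seed the TopoRAG context.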
Abstract: We present a governance-aware hybrid fine-tuning framework for multilingual, low-resource adaptation of large language models. The core algorithm combines gradient-aligned low-rank updates with structured orthogonal transformations through layer-wise mixing and introduces unitary constraints in selected sub-layers to stabilize deep optimization. In tandem with lightweight, label-free data governance steps, including language identification, near-duplicate removal, and quality filtering, the framework targets accuracy, calibration, and cross-language parity under tight compute budgets. Across XNLI and FLORES, the hybrid approach delivers consistent gains over strong PEFT baselines while maintaining directional balance and improving probability calibration, as shown in Tables II and III. It is more resilient to lightweight orthographic variants, as shown in Table IV, and benefits additively from simple governance steps, as shown in Table V. Training footprint measurements indicate modest overhead and a favorable cost-quality frontier, as shown in Table VI and Figure 2. Together, these results show that hybrid and unitary PEFT provide a stable and accessible path to resource-efficient multilingual adaptation when paired with practical data governance.
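The layer-wise mixing of low-rank updates with structured orthogonal transformations can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Cayley-transform parameterization and the per-layer blending rule (`hybrid_update`, coefficient `alpha`) are assumptions chosen to make the idea concrete.

```python
# Sketch: blend a low-rank additive update (LoRA-style) with an orthogonal
# transformation of the frozen weight (BOFT-style), per layer.
import numpy as np

def cayley_orthogonal(S: np.ndarray) -> np.ndarray:
    """Map a skew-symmetric matrix S to an orthogonal matrix Q via the
    Cayley transform: Q = (I - S)^{-1} (I + S)."""
    I = np.eye(S.shape[0])
    return np.linalg.solve(I - S, I + S)

def hybrid_update(W, A, B, S, alpha):
    """Per-layer hybrid weight: alpha * (W + B @ A) blends in the low-rank
    update; (1 - alpha) * (Q @ W) blends in the orthogonal rotation."""
    Q = cayley_orthogonal(S)
    return alpha * (W + B @ A) + (1.0 - alpha) * (Q @ W)
```

The Cayley transform guarantees orthogonality of `Q` by construction, which is one standard way to realize the norm-preserving (unitary-style) constraints the abstract attributes to selected sub-layers.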




Abstract: Fine-tuning large language models (LLMs) remains a computational bottleneck due to their scale and memory demands. This paper presents a comprehensive evaluation of parameter-efficient fine-tuning (PEFT) techniques, including LoRA, BOFT, LoRA-GA, and uRNN, and introduces a novel hybrid strategy that dynamically integrates BOFT's orthogonal stability with LoRA-GA's gradient-aligned rapid convergence. By computing per-layer adaptive updates guided by gradient norms, the hybrid method achieves superior convergence efficiency and generalization across diverse tasks. We also explore, for the first time, the adaptation of unitary RNN (uRNN) principles to transformer-based LLMs, enhancing gradient stability through structured unitary constraints. Empirical evaluations on four benchmarks (GLUE, GSM8K, MT-Bench, and HumanEval), using models ranging from 7B to 405B parameters, demonstrate that our hybrid method consistently outperforms individual PEFT baselines, approaching full fine-tuning accuracy while reducing training time by up to 2.1x and memory usage by 50 percent. These findings establish the hybrid approach as a practical and scalable fine-tuning solution for real-world deployment of LLMs under resource constraints.
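The per-layer adaptive updates guided by gradient norms could look roughly like the rule below. This is an illustrative guess at one reasonable scheme, not the paper's actual algorithm: the function name `mixing_coefficients` and the normalization against the mean gradient norm are assumptions.

```python
# Sketch: derive per-layer mixing weights from gradient norms. Layers with
# larger gradient norms lean toward the fast, gradient-aligned low-rank
# (LoRA-GA-style) update; the remainder of the weight goes to the stable
# orthogonal (BOFT-style) update.
import numpy as np

def mixing_coefficients(grad_norms) -> np.ndarray:
    """grad_norms: one gradient norm per layer. Returns per-layer weights
    in (0, 1), monotone in the gradient norm: alpha_l = g_l / (g_l + mean(g))."""
    g = np.asarray(grad_norms, dtype=float)
    return g / (g + g.mean())
```

Each layer would then apply `alpha_l` to its LoRA-GA update and `1 - alpha_l` to its BOFT update, so the blend adapts per layer as training progresses.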