The ``black-box'' nature of deep learning models presents a significant barrier to their adoption for scientific discovery, where interpretability is paramount. This challenge is especially pronounced when discovering the governing equations of dynamical processes on networks or graphs, since the topological structure itself shapes the processes' behavior. This paper provides a rigorous, comparative assessment of state-of-the-art symbolic regression techniques for this task. We evaluate established methods, including sparse regression and MLP-based architectures, and introduce a novel adaptation of Kolmogorov-Arnold Networks (KANs) to graphs, designed to exploit their inherent interpretability. Across a suite of synthetic and real-world dynamical systems, our results demonstrate that both MLP- and KAN-based architectures can successfully recover the underlying symbolic equations, significantly surpassing existing baselines. Critically, we show that KANs achieve this performance with greater parsimony and transparency, as their learnable activation functions map more directly onto the true physical dynamics. This study offers a practical guide for researchers, clarifying the trade-off between model expressivity and interpretability, and establishes the viability of neural architectures for robust scientific discovery on complex systems.