Abstract: Deep learning-based perception pipelines in autonomous ground vehicles are vulnerable to both adversarial manipulation and network-layer disruption. We present a systematic, on-hardware experimental evaluation of five attack classes, the fast gradient sign method (FGSM), projected gradient descent (PGD), man-in-the-middle (MitM), denial-of-service (DoS), and phantom attacks, on low-cost autonomous vehicle platforms (JetRacer and Yahboom). Using a standardized 13-second experimental protocol and comprehensive automated logging, we characterize three dimensions of attack behavior: (i) control deviation, (ii) computational cost, and (iii) runtime responsiveness. Our analysis reveals that distinct attack classes produce consistent and separable "fingerprints" across these dimensions: perception attacks (MitM output manipulation and phantom projection) generate high steering-deviation signatures with nominal computational overhead; PGD produces a combined steering-perturbation and computational-load signature spanning multiple dimensions; and DoS exhibits frame-rate and latency degradation signatures with minimal control-plane perturbation. We demonstrate that our fingerprinting framework generalizes across both digital attacks (adversarial perturbations, network manipulation) and environmental attacks (projected false features), providing a foundation for attack-aware monitoring systems and targeted, signature-based defense mechanisms.
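For reference, the two gradient-based attack classes named above follow the standard formulations from the adversarial-examples literature; the symbols below (perturbation budget $\epsilon$, step size $\alpha$, loss $J$, model parameters $\theta$, feasible set $\mathcal{S}$) are the conventional ones, not parameter values from this work.

```latex
% FGSM: a single step in the direction of the sign of the loss gradient
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\bigl(\nabla_x J(\theta, x, y)\bigr)

% PGD: iterated signed-gradient steps, each projected back onto the
% allowed perturbation set S around the clean input x
x^{(t+1)} = \Pi_{x + \mathcal{S}}\Bigl(x^{(t)} + \alpha \cdot \operatorname{sign}\bigl(\nabla_x J(\theta, x^{(t)}, y)\bigr)\Bigr)
```

FGSM is thus the one-step special case of PGD, which is consistent with PGD producing the stronger combined steering-and-compute signature reported above: the iterated gradient computations add measurable load.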