Addressing real-world optimization challenges requires not only advanced metaheuristics but also continuous refinement of their internal mechanisms. This paper explores the integration of machine learning, in the form of neural surrogate models, into metaheuristics through an emerging lens: energy consumption. While surrogates are widely used to reduce the computational cost of expensive objective functions, their combined impact on energy efficiency, algorithmic performance, and solution accuracy remains largely unquantified. We provide a critical investigation into this intersection, aiming to advance the design of energy-aware, surrogate-assisted search algorithms. Our experiments reveal substantial benefits: employing a state-of-the-art pre-trained surrogate can reduce energy consumption by up to 98\%, execution time by approximately 98\%, and memory usage by around 99\%. Moreover, increasing the training dataset size further enhances these gains by lowering the per-use computational cost. Static pre-training and continuous (iterative) retraining offer complementary advantages: the former favors time and energy savings, whereas the latter favors accuracy and overall cost across problems. Surrogates can, however, degrade both cost and accuracy in some settings, and therefore cannot be adopted blindly. These findings support a more holistic approach to surrogate-assisted optimization, one that integrates energy alongside time and predictive accuracy in performance assessments.