The rapid proliferation of mobile devices and data-intensive applications in future wireless networks imposes substantial computational burdens on resource-constrained devices, thereby fostering the emergence of over-the-air computation (AirComp) as a key paradigm for edge intelligence. To enhance the efficiency and scalability of AirComp systems, this paper proposes a dual-approach framework that systematically transitions from traditional mathematical optimization to deep reinforcement learning (DRL) for resource allocation under execution uncertainty. Specifically, we establish a system model that captures execution uncertainty through Gamma-distributed computational workloads, leading to challenging nonlinear optimization problems involving Gamma functions. For single-user scenarios, we design block coordinate descent (BCD) and majorization-maximization (MM) algorithms that yield semi-closed-form solutions with provable performance guarantees. In dynamic multi-user environments, however, conventional optimization becomes computationally intractable due to inter-user interference and resource contention. To this end, we introduce a Deep Q-Network (DQN)-based DRL framework that adaptively learns resource allocation policies through interaction with the environment. This dual methodology bridges analytical tractability and adaptive intelligence, leveraging optimization for foundational insight and learning for real-time adaptability. Extensive numerical results corroborate the performance gains achieved with increased edge server density and validate the superiority of the proposed optimization-to-learning paradigm for next-generation AirComp systems.