The concurrent optimization of language models and instructional prompts presents a significant challenge for deploying efficient and effective AI systems, particularly when balancing performance against computational costs such as token usage. This paper introduces and assesses a bi-objective evolutionary search engine designed to navigate this complex space, focusing specifically on Small Language Models (SLMs). We employ the NSGA-II algorithm together with a prompt grammar to simultaneously optimize task accuracy and token efficiency across a set of reasoning tasks. Our experiments identify a diverse set of high-performing model-prompt combinations and quantitatively reveal the critical trade-off between the two objectives. This research highlights task-specific affinities between particular SLMs and prompt structures (e.g., instructions, context, chain of thought). The resulting Pareto fronts offer decision-makers a practical portfolio of optimized solutions adaptable to their specific constraints. This automated approach moves beyond traditional manual tuning, providing a foundational framework for discovering effective human-AI interaction patterns.
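To make the bi-objective formulation concrete, the minimal sketch below enumerates a small hypothetical space of SLM-prompt combinations and filters it to a Pareto front over (accuracy, token cost). The model names, prompt components, and the `evaluate` stub are placeholders invented for illustration, not the configurations studied in the paper; the actual system evolves candidates with NSGA-II over a prompt grammar rather than enumerating them exhaustively.

```python
import itertools
import random

# Hypothetical candidate space: SLMs and optional prompt components.
# All names are placeholders, not the models or prompts evaluated in the paper.
MODELS = ["slm-a-1b", "slm-b-3b", "slm-c-7b"]
PROMPT_COMPONENTS = ["instruction", "context", "chain_of_thought"]


def evaluate(model, components):
    """Placeholder for the expensive step: run the model with the assembled
    prompt on a reasoning benchmark and measure accuracy and token usage.
    Random values are returned here purely to keep the sketch executable."""
    accuracy = random.uniform(0.3, 0.9) + 0.02 * len(components)
    tokens = 50 + 120 * len(components) + random.randint(0, 40)
    return accuracy, tokens


def pareto_front(candidates):
    """Keep candidates that are not dominated: no other candidate is at least
    as accurate AND at least as cheap in tokens while differing in one objective."""
    front = []
    for cand in candidates:
        _, acc, tok = cand
        dominated = any(
            o_acc >= acc and o_tok <= tok and (o_acc, o_tok) != (acc, tok)
            for _, o_acc, o_tok in candidates
        )
        if not dominated:
            front.append(cand)
    return front


# Pair every model with every subset of prompt components and score each pair.
candidates = []
for model in MODELS:
    for r in range(len(PROMPT_COMPONENTS) + 1):
        for combo in itertools.combinations(PROMPT_COMPONENTS, r):
            acc, tok = evaluate(model, combo)
            candidates.append(((model, combo), acc, tok))

# Report the non-dominated set, ordered from cheapest to most expensive.
for (model, combo), acc, tok in sorted(pareto_front(candidates), key=lambda c: c[2]):
    print(f"{model:10s} + {list(combo)!s:45s} accuracy={acc:.2f} tokens={tok}")
```

The exhaustive loop is feasible only for this toy space; as the prompt grammar grows, the combinatorial explosion is exactly what motivates replacing enumeration with the evolutionary NSGA-II search described above.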