Network reliability assessment is pivotal for ensuring the robustness of modern infrastructure systems, from power grids to communication networks. While exact reliability computation for binary-state networks is NP-hard, existing approximation methods face critical tradeoffs among accuracy, scalability, and data efficiency. To address these gaps, this study evaluates 20 machine learning methods across three reliability regimes: full range (0.0-1.0), high reliability (0.9-1.0), and ultra-high reliability (0.99-1.0). We demonstrate that large-scale networks with arc reliabilities of at least 0.9 exhibit near-unity system reliability, enabling computational simplifications. Further, we establish a dataset-scale-driven paradigm for algorithm selection: Artificial Neural Networks (ANNs) excel with limited data, while Polynomial Regression (PR) achieves superior accuracy in data-rich environments. Our findings show that ANN attains a Test-MSE of 7.24E-05 at 30,000 samples, while PR reaches its optimal performance (5.61E-05) at 40,000 samples; both outperform traditional Monte Carlo simulation. These insights provide actionable guidelines for balancing accuracy, interpretability, and computational efficiency in reliability engineering, with implications for infrastructure resilience and system optimization.