Continuous signal representations are naturally suited for inverse problems, such as magnetic resonance imaging (MRI) and computed tomography, because the measurements depend on an underlying physically continuous signal. While classical methods rely on predefined analytical bases like B-splines, implicit neural representations (INRs) have emerged as a powerful alternative that uses coordinate-based networks to parameterize continuous functions with implicitly defined bases. Despite their empirical success, direct comparisons of their intrinsic representation capabilities with those of conventional models remain limited. This preliminary empirical study compares a positional-encoded INR with a cubic B-spline model for continuous function fitting from sparse random samples, isolating the difference in representation capacity by using only coefficient-domain Tikhonov regularization. Results demonstrate that, under oracle hyperparameter selection, the INR achieves a lower normalized root-mean-squared error, yielding sharper edge transitions and fewer oscillatory artifacts than the oracle-tuned B-spline model. Additionally, we show that a practical bilevel optimization framework for INR hyperparameter selection, based on a split of the measurement data, effectively approximates oracle performance. These findings empirically support the superior representation capacity of INRs for sparse data fitting.
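The ingredients mentioned above can be illustrated with a minimal sketch. Note the simplifications: the network is reduced to a linear layer on positional-encoding features (a real INR applies an MLP to the encoded coordinates), the target is a hypothetical step signal chosen to exhibit an edge, and all function names, the sample counts, and the regularization grid are illustrative assumptions rather than the study's actual setup. The sketch shows coefficient-domain Tikhonov regularization and a measurement-data split for selecting the regularization weight, i.e. the outer loop of a bilevel scheme.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    # Fourier features: constant term plus [sin(2^k pi x), cos(2^k pi x)]
    # for k = 0 .. num_freqs-1, giving 2*num_freqs + 1 features per point.
    feats = [np.ones_like(x)]
    for k in range(num_freqs):
        feats.append(np.sin(2**k * np.pi * x))
        feats.append(np.cos(2**k * np.pi * x))
    return np.stack(feats, axis=-1)

def tikhonov_fit(Phi, y, lam):
    # Coefficient-domain Tikhonov: minimize ||Phi c - y||^2 + lam ||c||^2.
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 20)              # sparse random samples
y_train = np.where(x_train > 0.5, 1.0, 0.0)      # step signal with a sharp edge
Phi = positional_encoding(x_train)

# Data-split hyperparameter selection (outer loop of a bilevel scheme):
# fit coefficients on one split, pick lam minimizing held-out error.
idx = rng.permutation(len(x_train))
fit_idx, val_idx = idx[:14], idx[14:]
best_lam, best_err = None, np.inf
for lam in [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]:
    c_fit = tikhonov_fit(Phi[fit_idx], y_train[fit_idx], lam)
    err = np.mean((Phi[val_idx] @ c_fit - y_train[val_idx]) ** 2)
    if err < best_err:
        best_lam, best_err = lam, err

# Refit on all samples with the selected lam and evaluate NRMSE on a
# dense grid (the signal range is 1, so RMSE is already normalized).
c = tikhonov_fit(Phi, y_train, best_lam)
x_test = np.linspace(0.0, 1.0, 200)
y_hat = positional_encoding(x_test) @ c
y_true = np.where(x_test > 0.5, 1.0, 0.0)
nrmse = np.sqrt(np.mean((y_hat - y_true) ** 2))
```

Replacing the positional-encoding features with a cubic B-spline basis evaluated at the same sample points reuses `tikhonov_fit` unchanged, which is what allows the comparison to isolate representation capacity rather than the regularizer.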