Many signal processing applications, such as acoustic echo cancellation and wireless channel estimation, require identifying systems in which only a small fraction of coefficients are active, i.e., sparse systems. Zero-attracting adaptive filters address this by adding a penalty that pulls inactive coefficients toward zero, accelerating convergence. However, these algorithms decide which coefficients to penalize based solely on their current magnitude. This is problematic during early adaptation: active coefficients that should eventually grow large start out small, making them indistinguishable from truly inactive ones, so the algorithm applies strong penalties to the very coefficients it needs to develop and slows initial convergence. This paper addresses the problem with a dual-domain approach that views the coefficients from two perspectives simultaneously. Beyond tracking coefficient magnitude, we introduce an error-memory vector that monitors how persistently each coefficient contributes to the adaptation error over time; a coefficient that repeatedly appears in the error signal is probably active even while it is still small. By combining both views, the proposed dual-domain sparse adaptive filter (DD-SAF) identifies active coefficients early and removes their penalties accordingly. Moreover, a complete theoretical analysis is derived. The analysis shows that DD-SAF retains the stability properties of the standard least-mean-square (LMS) algorithm while achieving provably better steady-state performance than existing methods. Simulations demonstrate that DD-SAF converges to steady state faster and/or converges to a lower mean-square deviation (MSD) than the standard LMS and reweighted zero-attracting LMS (RZA-LMS) algorithms in sparse system identification settings.
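To make the mechanism concrete, the following is a minimal Python sketch of the idea described above, not the paper's exact update. It combines the standard RZA-LMS recursion (LMS gradient step plus a reweighted zero-attraction term) with a hypothetical error-memory vector: an exponentially weighted average of each coefficient's contribution to the adaptation error, used to shrink the penalty on coefficients that persistently appear in the error. The parameter names (`mu`, `rho`, `eps`, `lam`, `tau`) and the specific gating form are illustrative assumptions.

```python
import numpy as np

def dd_saf_sketch(x, d, n_taps, mu=0.01, rho=5e-4, eps=10.0, lam=0.99, tau=0.1):
    """Illustrative dual-domain sparse LMS sketch (assumed form, not the paper's exact update).

    x : input signal, d : desired signal, n_taps : filter length.
    Returns the adapted coefficient vector w.
    """
    w = np.zeros(n_taps)
    m = np.zeros(n_taps)  # error-memory vector (hypothetical form)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]          # regressor, most recent sample first
        e = d[n] - w @ u                           # a priori estimation error
        # Track how persistently each tap contributes to the error (assumption):
        m = lam * m + (1 - lam) * np.abs(e * u)
        gate = 1.0 / (1.0 + m / tau)               # large memory -> weaker zero attraction
        # LMS step plus gated reweighted zero-attraction (RZA) penalty:
        w = w + mu * e * u - rho * gate * np.sign(w) / (1.0 + eps * np.abs(w))
    return w
```

With `gate` fixed at 1 this reduces to RZA-LMS; the gating term is what lets small-but-persistent (i.e., likely active) coefficients escape the penalty early, which is the behavior the abstract attributes to DD-SAF.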