The use of distributed arithmetic (DA) in adaptive filtering (AF) algorithms has attracted significant attention in recent years because of its potential to enhance computational efficiency and reduce resource requirements. This paper explores the application of DA to AF algorithms, analyzing trends, discussing challenges, and outlining future prospects. It begins with an overview of both DA and AF algorithms, highlighting their individual merits and established applications. The integration of DA into AF algorithms is then examined, showcasing its ability to optimize multiply-accumulate operations and mitigate the computational burden associated with AF algorithms. Throughout the paper, critical trends observed in the field are discussed, including advancements in DA-based hardware architectures, and the challenges encountered in implementing DA-based AF are addressed. The continued evolution of DA techniques to meet the demands of modern AF applications, including real-time processing, resource-constrained environments, and high-dimensional data streams, is anticipated. In conclusion, this paper consolidates the current state of applying DA to AF algorithms, offering insights into prevailing trends, open challenges, and directions for future research and development in the field. The fusion of these two domains holds promise for improved computational efficiency, reduced hardware complexity, and enhanced performance in a variety of signal processing applications.
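To make the multiply-accumulate optimization concrete, the following is a minimal sketch (not taken from the paper) of a DA-style inner product: each input sample is decomposed into bit planes, and one precomputed look-up-table read per bit plane replaces the per-tap multiplications. The function name, unsigned fixed-point input format, and bit width are illustrative assumptions.

```python
def da_inner_product(coeffs, xs, nbits):
    """Distributed-arithmetic inner product for unsigned integer inputs.

    Precomputes a 2**K-entry look-up table of partial coefficient sums
    (K = number of taps), then performs one LUT read and one
    shift-accumulate per input bit plane instead of K multiplications.
    Illustrative sketch only; a hardware version would store the LUT in ROM.
    """
    K = len(coeffs)
    # lut[m] = sum of coeffs[k] over every bit k set in the address m
    lut = [sum(c for k, c in enumerate(coeffs) if (m >> k) & 1)
           for m in range(1 << K)]
    acc = 0
    for b in range(nbits):  # one bit plane per iteration
        # Gather bit b of every input sample into a K-bit LUT address
        addr = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
        acc += lut[addr] << b  # shift-and-accumulate, no multiplier needed
    return acc

# Agrees with a direct multiply-accumulate evaluation:
coeffs, xs = [3, -1, 4, 2], [5, 6, 7, 2]
y = da_inner_product(coeffs, xs, nbits=3)
```

The memory cost grows as 2^K, which is why practical DA architectures partition long filters into several small LUTs, one of the hardware-architecture trends surveyed in the paper.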
In high-sample-rate applications of the least-mean-square (LMS) adaptive filtering algorithm, pipelining and/or block processing is required. In this paper, a stochastic analysis of the delayed block LMS algorithm is presented. In contrast to earlier work, pipelining and block processing are considered jointly and examined extensively. Separate analyses of the transient and steady states, based on the recursive relation for the delayed block excess mean square error (MSE), are presented to estimate the step-size bound, adaptation accuracy, and adaptation speed. The effect of different amounts of pipelining delay and different block sizes on the adaptation accuracy and speed of the adaptive filter is studied for various filter lengths and speed-ups. It is concluded that, for a constant speed-up, a large delay with a small block size leads to a slower convergence rate than a small delay with a large block size, while yielding almost the same steady-state MSE. Monte Carlo simulations show fairly good agreement with the proposed estimates for Gaussian inputs.
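The update being analyzed can be sketched as follows: errors are computed sample by sample, the gradient is accumulated over each block, and the weight update applies a gradient from several blocks earlier, modelling the pipeline latency. This is an illustrative reconstruction under standard delayed block LMS assumptions, not the paper's exact simulation code; the parameter values are arbitrary.

```python
import numpy as np

def delayed_block_lms(x, d, taps=8, block=4, mu=0.01, delay=2):
    """Illustrative delayed block LMS (system identification setup).

    Weights are updated once per block of `block` samples, and the update
    at block k uses the gradient accumulated `delay` blocks earlier --
    the joint pipelining/block-processing structure analyzed in the paper.
    """
    w = np.zeros(taps)
    xpad = np.concatenate([np.zeros(taps - 1), x])  # zero prehistory
    e = np.zeros(len(x))
    pipeline = [np.zeros(taps)] * delay  # gradients awaiting application
    for k in range(len(x) // block):
        g = np.zeros(taps)
        for i in range(block):
            n = k * block + i
            u = xpad[n : n + taps][::-1]  # [x[n], x[n-1], ..., x[n-taps+1]]
            e[n] = d[n] - w @ u           # a priori error
            g += e[n] * u                 # accumulate block gradient
        pipeline.append(g)
        w = w + mu * pipeline.pop(0)      # apply gradient from `delay` blocks ago
    return w, e

# Identify an unknown FIR response h from its input/output data:
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[: len(x)]
w, e = delayed_block_lms(x, d)
```

With a step size safely inside the stability bound, the weights converge to the unknown response despite the delayed gradient; increasing `delay` at a fixed `block` slows convergence, which is the trade-off quantified in the paper's transient analysis.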