This report elaborates on approximations for the discrete Fourier transform obtained by replacing the exact twiddle factors of the Cooley-Tukey algorithm with low-complexity values, such as $0$, $\pm \frac{1}{2}$, and $\pm 1$.
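For illustration, a minimal Python sketch of the idea: each exact twiddle factor has its real and imaginary parts rounded to the nearest element of $\{0, \pm\frac{1}{2}, \pm 1\}$, yielding a multiplication-free approximate DFT matrix. This direct matrix quantization is only a simplified stand-in for the report's Cooley-Tukey-based construction.

```python
import numpy as np

LEVELS = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def quantize(z):
    """Round the real and imaginary parts of a twiddle factor to the
    nearest low-complexity level in {0, +/-1/2, +/-1}."""
    re = LEVELS[np.argmin(np.abs(LEVELS - z.real))]
    im = LEVELS[np.argmin(np.abs(LEVELS - z.imag))]
    return complex(re, im)

def approx_dft_matrix(n):
    """n-point DFT matrix with every twiddle factor quantized."""
    k, m = np.meshgrid(np.arange(n), np.arange(n))
    w = np.exp(-2j * np.pi * k * m / n)   # exact twiddle factors
    return np.array([[quantize(z) for z in row] for row in w])

x = np.random.randn(8)
print(approx_dft_matrix(8) @ x)   # approximate spectrum (shifts/adds only)
print(np.fft.fft(x))              # exact DFT for comparison
```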
This paper introduces an adaptive filtering process based on shrinking the wavelet coefficients of the corresponding signal wavelet representation. The filtering procedure employs a threshold determined by an iterative algorithm inspired by control charts, a tool from statistical process control (SPC). The proposed method, called SpcShrink, is able to discriminate the wavelet coefficients that significantly represent the signal of interest. SpcShrink is presented algorithmically and evaluated numerically by means of Monte Carlo simulations. Two empirical applications to real biomedical data filtering are also included and discussed. SpcShrink shows superior performance when compared with competing algorithms.
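To make the shrinkage step concrete, the sketch below (assuming the PyWavelets package, `pywt`) performs hard-threshold wavelet denoising. The threshold used here is the classical universal threshold, standing in as a placeholder for the SPC-based iterative rule of SpcShrink.

```python
import numpy as np
import pywt  # PyWavelets

def shrink(signal, wavelet="db4", level=4):
    """Hard-threshold wavelet denoising; the universal threshold
    below stands in for the SPC-based iterative rule."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)
```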
In this paper, we introduce low-complexity multidimensional discrete cosine transform (DCT) approximations. Three-dimensional DCT (3D DCT) approximations are formalized in terms of high-order tensor theory, and the formulation is extended to higher dimensions with arbitrary lengths. Several multiplierless $8\times 8\times 8$ approximate methods are proposed, and the computational complexity is discussed for the general multidimensional case. The arithmetic cost of the proposed methods was assessed, requiring considerably fewer operations when compared with the exact 3D DCT. The proposed approximations were embedded into a 3D DCT-based video coding scheme, and a modified quantization step was introduced. Simulation results show that the approximate 3D DCT coding methods offer almost identical output visual quality when compared with the exact 3D DCT scheme. The proposed 3D approximations were also employed as a tool for visual tracking; the resulting approximate 3D DCT-based system performs similarly to the original exact 3D DCT-based method. In general, the suggested methods showed competitive performance at a considerably lower computational cost.
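A brief sketch of the tensor formulation: a separable 3D transform amounts to applying a 1-D transform matrix along each mode of the $8\times 8\times 8$ block. For illustration, the matrix below is the signed DCT (entrywise sign of the exact DCT matrix), a known multiplierless approximation, not necessarily one of the transforms proposed here.

```python
import numpy as np
from scipy.fft import dct

def signed_dct_matrix(n=8):
    """Signed DCT: entrywise sign of the exact DCT-II matrix,
    giving entries in {0, +/-1} (an illustrative approximation)."""
    c = dct(np.eye(n), axis=0, norm="ortho")   # exact DCT-II matrix
    return np.sign(c)

def dct3_approx(block):
    """Apply the 1-D approximation along each mode of the order-3
    tensor (three mode products): a separable 3D transform."""
    t = signed_dct_matrix(block.shape[0])
    return np.einsum("ai,bj,ck,ijk->abc", t, t, t, block)

block = np.random.randn(8, 8, 8)
print(dct3_approx(block).shape)   # (8, 8, 8)
```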
Two-dimensional (2-D) autoregressive moving average (ARMA) models are commonly applied to describe real-world image data, usually under the assumption of Gaussian or symmetric noise. However, real-world data often exhibit non-Gaussian behavior, with asymmetric distributions and strictly positive values. In particular, SAR images are known to be well characterized by the Rayleigh distribution. In this context, an ARMA model tailored for 2-D Rayleigh-distributed data is introduced -- the 2-D RARMA model. The 2-D RARMA model is derived and conditional likelihood inference is discussed. The proposed model was submitted to extensive Monte Carlo simulations to evaluate the performance of the conditional maximum likelihood estimators. Moreover, in the context of SAR image processing, two comprehensive numerical experiments were performed, comparing the anomaly detection and image modeling results of the proposed model with those of traditional 2-D ARMA models and competing methods in the literature.
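As a rough illustration of conditional likelihood inference under a Rayleigh assumption, the sketch below writes the Rayleigh log-likelihood in a mean parameterization and pairs it with a toy one-dimensional AR-type systematic component under a log link; the paper's actual 2-D RARMA specification is more general.

```python
import numpy as np

def rayleigh_loglik(y, mu):
    """Rayleigh log-likelihood in a mean parameterization:
    sigma = mu * sqrt(2/pi), hence sigma^2 = 2*mu^2/pi."""
    sigma2 = 2.0 * mu**2 / np.pi
    return np.sum(np.log(y / sigma2) - y**2 / (2.0 * sigma2))

def conditional_mean(y, alpha, phi):
    """Toy 1-D AR(1)-type systematic component with log link:
    log(mu_t) = alpha + phi * log(y_{t-1}), for y > 0."""
    mu = np.empty_like(y, dtype=float)
    mu[0] = np.exp(alpha)
    mu[1:] = np.exp(alpha + phi * np.log(y[:-1]))
    return mu
```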
The Rayleigh regression model was recently proposed for modeling amplitude values of synthetic aperture radar (SAR) image pixels. However, inferences from such a model are based on maximum likelihood estimators, which can be biased for small signal lengths. The Rayleigh regression model for SAR images often takes into account small pixel windows, which may lead to inaccurate results. In this letter, we introduce bias-adjusted estimators tailored for the Rayleigh regression model based on: (i) the Cox-Snell method; (ii) Firth's scheme; and (iii) the parametric bootstrap method. We present numerical experiments considering both synthetic and actual SAR data sets. The bias-adjusted estimators yield nearly unbiased estimates and accurate modeling results.
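Of the three schemes, the parametric bootstrap is the easiest to sketch generically. Assuming hypothetical user-supplied `fit` and `simulate` routines for the model at hand, the corrected estimate is $2\hat{\theta} - \bar{\theta}^{*}$, where $\bar{\theta}^{*}$ is the mean of the bootstrap re-estimates.

```python
import numpy as np

def bootstrap_bias_correct(theta_hat, simulate, fit, n, b=500, seed=None):
    """Parametric bootstrap bias correction:
    theta_bc = 2*theta_hat - mean of bootstrap re-estimates.
    `simulate(theta, n, rng)` and `fit(sample)` are hypothetical
    user-supplied routines for the model at hand."""
    rng = np.random.default_rng(seed)
    boots = np.array([fit(simulate(theta_hat, n, rng)) for _ in range(b)])
    return 2.0 * np.asarray(theta_hat) - boots.mean(axis=0)
```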
The presence of outliers (anomalous values) in synthetic aperture radar (SAR) data and misspecification of statistical image models may result in inaccurate inferences. To avoid such issues, a Rayleigh regression model based on a robust estimation process is proposed as a more realistic approach to model this type of data. This paper aims at obtaining Rayleigh regression model parameter estimators that are robust to the presence of outliers. The proposed approach considers the weighted maximum likelihood method and was submitted to numerical experiments using simulated and measured SAR images. Monte Carlo simulations were employed to assess the performance of the proposed robust estimators for finite signal lengths, their sensitivity to outliers, and their breakdown point. For instance, in corrupted signals, the non-robust estimators show a relative bias $65$-fold larger than that of the robust approach. In terms of sensitivity analysis and breakdown point, the robust scheme reduced the mean absolute value of the two measures by about $96\%$ and $10\%$, respectively, in comparison with the non-robust estimators. Moreover, two SAR data sets were used to compare the ground type and anomaly detection results of the proposed robust scheme with competing methods in the literature.
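A minimal sketch of the weighted-likelihood idea: observations receive weights that decay for large residuals, limiting the influence of outliers on the fit. The Huber-type weight function below is illustrative and not necessarily the one adopted in the paper.

```python
import numpy as np

def huber_weights(resid, c=1.345):
    """Huber-type weights: unity for small residuals, decaying as
    c/|r| beyond the cutoff (illustrative choice)."""
    a = np.abs(resid)
    return np.where(a <= c, 1.0, c / a)

def weighted_rayleigh_loglik(y, mu, w):
    """Weighted Rayleigh log-likelihood (mean parameterization);
    downweighted terms limit the influence of outliers."""
    sigma2 = 2.0 * mu**2 / np.pi
    return np.sum(w * (np.log(y / sigma2) - y**2 / (2.0 * sigma2)))
```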
This paper proposes the beta-binomial autoregressive moving average (BBARMA) model for modeling quantized amplitude data and bounded count data. The BBARMA model estimates the conditional mean of a beta-binomial distributed variable observed over time by means of a dynamic structure including: (i) autoregressive and moving average terms; (ii) a set of regressors; and (iii) a link function. Besides introducing the new model, we develop parameter estimation, detection tools, an out-of-signal forecasting scheme, and diagnostic measures. In particular, we provide closed-form expressions for the conditional score vector and the conditional information matrix. The proposed model was submitted to extensive Monte Carlo simulations to evaluate the performance of the conditional maximum likelihood estimators and of the proposed detector. The derived detector outperforms the usual ARMA- and Gaussian-based detectors for sinusoidal signal detection. We also present an experiment for modeling and forecasting the monthly number of rainy days in Recife, Brazil.
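The sketch below illustrates the two ingredients: the beta-binomial log-likelihood in a mean/precision parameterization, and a toy BBARMA(1,1)-type recursion for the conditional mean under a logit link. The names and the exact recursion are illustrative, not the paper's full specification.

```python
import numpy as np
from scipy.special import betaln, gammaln

def betabinom_loglik(y, n, mu, phi):
    """Beta-binomial log-likelihood with mean mu in (0,1) and
    precision phi, via a = mu*phi and b = (1-mu)*phi."""
    a, b = mu * phi, (1.0 - mu) * phi
    return np.sum(gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
                  + betaln(y + a, n - y + b) - betaln(a, b))

def logit(p, eps=1e-6):
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def bbarma11_mean(y, n, alpha, ar, ma):
    """Toy BBARMA(1,1)-type recursion under a logit link:
    eta_t = alpha + ar*logit(y_{t-1}/n) + ma*(y_{t-1}/n - mu_{t-1})."""
    mu = np.empty(len(y))
    mu[0] = 1.0 / (1.0 + np.exp(-alpha))      # t = 0: no past information
    for t in range(1, len(y)):
        err = y[t - 1] / n - mu[t - 1]        # moving average innovation
        eta = alpha + ar * logit(y[t - 1] / n) + ma * err
        mu[t] = 1.0 / (1.0 + np.exp(-eta))    # inverse logit link
    return mu
```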
In this paper, we present an approach for minimizing the computational complexity of trained convolutional neural networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling, bias coefficients, and activation functions) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero-one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of different complexity: (i) a "light" but efficient ConvNet for face detection (with around 1000 parameters); (ii) another for hand-written digit classification (with more than 180000 parameters); and (iii) a significantly larger ConvNet, AlexNet, with $\approx$1.2 million matrices. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, very low-complexity approximations were derived while maintaining almost equal classification performance.
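The paper obtains the filters by binary linear programming; for a separable product-of-grids constraint set, entrywise rounding to the nearest dyadic rational minimizes the same Frobenius criterion, so the simplified sketch below conveys the idea.

```python
import numpy as np

def nearest_dyadic(w, bits=3, max_abs=2.0):
    """Entrywise nearest dyadic rational k / 2**bits with values
    clipped to [-max_abs, max_abs]; over this separable grid,
    entrywise rounding minimizes the Frobenius distance to w."""
    scale = 2.0 ** bits
    return np.round(np.clip(w, -max_abs, max_abs) * scale) / scale

w = np.random.randn(3, 3)           # e.g., a trained 3x3 convolution filter
print(nearest_dyadic(w, bits=2))    # implementable with shifts and adds only
```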
This paper introduces a matrix parametrization method based on the Loeffler discrete cosine transform (DCT) algorithm. As a result, a new class of eight-point DCT approximations is proposed, capable of unifying the mathematical formalism of several eight-point DCT approximations archived in the literature. Pareto-efficient DCT approximations are obtained through multicriteria optimization, where computational complexity, proximity, and coding performance are considered. Efficient approximations and their scaled 16- and 32-point versions are embedded into image and video encoders, including a JPEG-like codec and the H.264/AVC and H.265/HEVC standards. Results are compared with those of the unmodified standard codecs. The efficient approximations are also mapped and implemented on a Xilinx VLX240T FPGA and evaluated for area, speed, and power consumption.
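Pareto-efficient selection can be sketched independently of the parametrization: given candidate transforms scored by arithmetic cost and coding performance, keep only the non-dominated ones. The candidate values below are hypothetical.

```python
def pareto_front(points):
    """Keep the non-dominated candidates, where lower cost and
    higher quality are preferred; points = [(cost, quality), ...]."""
    return [(c, q) for (c, q) in points
            if not any(c2 <= c and q2 >= q and (c2, q2) != (c, q)
                       for (c2, q2) in points)]

# Hypothetical (addition count, coding gain in dB) candidates:
candidates = [(14, 8.12), (18, 8.34), (22, 8.33), (24, 8.63)]
print(pareto_front(candidates))   # (22, 8.33) is dominated by (18, 8.34)
```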
In this paper, two 8-point multiplication-free DCT approximations based on Chen's factorization are proposed, and their fast algorithms are derived. Both transformations are assessed in terms of computational cost, error energy, and coding gain. Experiments with a JPEG-like image compression scheme are performed, and the results are compared with those of competing methods. The proposed low-complexity transforms are scaled according to the Jridi-Alfalou-Meher algorithm to obtain 16- and 32-point approximations. The new sets of transformations are embedded into the HEVC reference software to provide a fully HEVC-compliant video coding scheme. We show that the approximate transforms can outperform traditional transforms and state-of-the-art methods at a very low computational cost.
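The scaling step can be sketched as follows: a $2N$-point approximation is assembled from two copies of an $N$-point transform, preceded by a sum/difference butterfly and followed by an even/odd interleaving permutation, in the spirit of the Jridi-Alfalou-Meher construction (details here are illustrative).

```python
import numpy as np

def jam_scale(t):
    """Assemble a 2N-point approximation from an N-point transform t:
    a sum/difference butterfly feeds two copies of t, and a
    permutation interleaves the outputs into even/odd rows."""
    n = t.shape[0]
    i, j = np.eye(n), np.fliplr(np.eye(n))   # identity, counter-identity
    butterfly = np.block([[i, j], [i, -j]])  # sums on top, differences below
    core = np.block([[t, np.zeros((n, n))],
                     [np.zeros((n, n)), t]]) @ butterfly
    perm = np.zeros((2 * n, 2 * n))
    perm[0::2, :n] = i                       # even rows from the sums path
    perm[1::2, n:] = i                       # odd rows from the differences path
    return perm @ core

t8 = np.sign(np.random.randn(8, 8))          # stand-in 8-point approximation
print(jam_scale(t8).shape)                   # (16, 16)
```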