Abstract: Monitoring feeding behaviour is a relevant task for efficient herd management and the effective use of available resources in grazing cattle. The ability to automatically recognise animals' feeding activities through the identification of specific jaw movements allows for improved diet formulation, as well as early detection of metabolic problems and symptoms of animal discomfort, among other benefits. The use of sensors to obtain signals for such monitoring has become popular in the last two decades. The most frequently employed sensors include accelerometers, microphones, and cameras, each with its own set of advantages and drawbacks. An unexplored aspect is the simultaneous use of multiple sensors to combine their signals and thereby enhance the precision of the estimations. In this direction, this work introduces a deep neural network based on the fusion of acoustic and inertial signals, composed of convolutional, recurrent, and dense layers. The main advantage of this model is that it combines the signals by automatically extracting features from each of them independently. The model emerged from an exploration and comparison of different neural network architectures proposed in this work, which carry out information fusion at different levels. Feature-level fusion outperformed data-level and decision-level fusion by at least 0.14 in terms of the F1-score. Moreover, a comparison with state-of-the-art machine learning methods is presented, including both traditional and deep learning approaches. The proposed model yielded an F1-score of 0.802, representing a 14% increase over previous methods. Finally, results from an ablation study and a post-training quantization evaluation are also reported.
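To illustrate the feature-level fusion idea described in this abstract, the following minimal Keras sketch builds a two-branch network that processes acoustic and inertial windows with convolutional and recurrent layers, concatenates the learned features, and classifies with dense layers. The input shapes, layer sizes, number of classes, and all hyperparameters are assumptions for illustration only and are not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(acoustic_shape=(2000, 1), inertial_shape=(200, 3), n_classes=4):
    # Acoustic branch: 1-D convolutions followed by a GRU over time.
    ac_in = layers.Input(shape=acoustic_shape, name="acoustic")
    x = layers.Conv1D(32, 9, activation="relu", padding="same")(ac_in)
    x = layers.MaxPooling1D(4)(x)
    x = layers.Conv1D(64, 9, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(4)(x)
    x = layers.GRU(64)(x)

    # Inertial branch: same pattern on the accelerometer channels.
    in_in = layers.Input(shape=inertial_shape, name="inertial")
    y = layers.Conv1D(32, 5, activation="relu", padding="same")(in_in)
    y = layers.MaxPooling1D(2)(y)
    y = layers.Conv1D(64, 5, activation="relu", padding="same")(y)
    y = layers.MaxPooling1D(2)(y)
    y = layers.GRU(64)(y)

    # Feature-level fusion: concatenate the per-modality embeddings,
    # then classify jaw-movement events with dense layers.
    z = layers.concatenate([x, y])
    z = layers.Dense(64, activation="relu")(z)
    out = layers.Dense(n_classes, activation="softmax")(z)
    return Model(inputs=[ac_in, in_in], outputs=out)

model = build_fusion_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Data-level fusion would instead concatenate raw windows before any layer, and decision-level fusion would train one classifier per modality and merge their outputs; the sketch above fuses at the intermediate feature level, which is the variant the abstract reports as best-performing.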
Abstract: In this work, we propose a time-varying wave-shape extraction algorithm based on a modified version of the adaptive non-harmonic model for non-stationary signals. The model encodes the time-varying wave-shape information in the relative amplitude and phase of the harmonic components of the wave shape. The algorithm was validated on both real and synthetic signals for the tasks of denoising, decomposition, and adaptive segmentation. For the denoising task, both monocomponent and multicomponent synthetic signals were considered. In both cases, the proposed algorithm accurately recovers the time-varying wave shape of non-stationary signals, even in the presence of high levels of noise, outperforming existing wave-shape estimation algorithms and denoising methods based on short-time Fourier transform thresholding. The denoising of an electroencephalogram signal was also performed, yielding similar results. For decomposition, our proposal recovered the constituent waveforms more accurately than existing methods by accounting for the time variations of the harmonic amplitude functions. Finally, the algorithm was used for the adaptive segmentation of synthetic signals and of an electrocardiogram from a patient undergoing ventricular fibrillation.
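For reference, a commonly used form of the adaptive non-harmonic model that admits time-varying wave shapes is sketched below; the notation, regularity constraints, and modulation terms are assumptions and may differ from the modified model actually used in the paper.

```latex
% Hedged sketch of a time-varying adaptive non-harmonic model with L harmonics.
\[
  f(t) \;=\; \sum_{\ell=1}^{L} B_\ell(t)\,
             \cos\!\bigl(2\pi \ell \phi(t) + \eta_\ell(t)\bigr) \;+\; n(t),
\]
% where \phi(t) is the monotonically increasing phase of the fundamental,
% B_\ell(t) > 0 and \eta_\ell(t) are slowly varying amplitude and phase
% modulations of the \ell-th harmonic, and n(t) is additive noise.
% The time-varying wave shape is then encoded by the relative quantities
\[
  \alpha_\ell(t) \;=\; \frac{B_\ell(t)}{B_1(t)},
  \qquad
  \beta_\ell(t) \;=\; \eta_\ell(t) - \ell\,\eta_1(t),
  \qquad \ell = 2,\dots,L .
\]
```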