Early diagnosis of Alzheimer's Disease (AD) is a challenging task due to its subtle and complex clinical symptoms. Deep learning-assisted medical diagnosis using image recognition techniques has become an important research topic in this field. The extracted features have to accurately capture the main variations of anatomical brain structures, yet feature extraction through deep learning training is time-consuming and computationally expensive. This study proposes a novel Alzheimer's disease detection model based on Convolutional Neural Networks. The model uses a pre-trained ResNet network as the backbone and incorporates a post-fusion algorithm for 3D medical images together with attention mechanisms. The experimental results indicate that the employed 2D fusion algorithm effectively reduces the model's training cost, and that the introduced attention mechanism accurately weights important regions in the images, further improving the model's diagnostic accuracy.
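As a rough illustration of the idea above, the following sketch (assumed and simplified; not the authors' exact architecture) fuses three axial slices of a 3D volume into a 2D input, passes it through a pretrained ResNet-18 backbone, and applies a simple learned spatial-attention mask before classification. The slice-sampling rule, attention form, and layer sizes are illustrative assumptions.

```python
# Sketch: 2D slice fusion of a 3D MRI volume + pretrained ResNet backbone + attention.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

def fuse_slices(volume_3d: torch.Tensor) -> torch.Tensor:
    """Fuse a 3D volume (D, H, W) into a 3-channel 2D image by sampling
    three evenly spaced axial slices (an assumed, simplified fusion rule)."""
    d = volume_3d.shape[0]
    return volume_3d[[d // 4, d // 2, 3 * d // 4]]        # shape (3, H, W)

class SpatialAttention(nn.Module):
    """Weights feature-map locations with a learned sigmoid mask."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.mask(x))

class ADClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop pool + fc
        self.attn = SpatialAttention(512)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, num_classes))

    def forward(self, volume_3d):
        x = fuse_slices(volume_3d).unsqueeze(0)            # (1, 3, H, W)
        return self.head(self.attn(self.features(x)))

# Example: classify one synthetic 3D volume.
logits = ADClassifier()(torch.randn(64, 224, 224))
```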
The majority of research on the quantization of Deep Neural Networks (DNNs) focuses on reducing the precision of tensors visible to high-level frameworks (e.g., weights, activations, and gradients). However, current hardware still relies on high-precision core operations, the most significant of which is the accumulation of products. This high-precision accumulation operation is gradually becoming the main computational bottleneck, because, so far, the use of low-precision accumulators has led to a significant degradation in performance. In this work, we present a simple method to train and fine-tune high-end DNNs that allows, for the first time, the use of cheaper $12$-bit accumulators with no significant degradation in accuracy. Lastly, we show that as the accumulation precision is decreased further, fine-grained gradient approximations can improve the DNN accuracy.
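A minimal sketch of how low-precision accumulation could be emulated in software (an assumption for illustration, not the paper's training or fine-tuning method): the running partial sum of a dot product is re-quantized to a narrow fixed-point format after every addition, so the accumulated rounding error mimics that of a 12-bit hardware accumulator.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int, max_abs: float) -> np.ndarray:
    """Symmetric fixed-point rounding to `bits` bits over [-max_abs, max_abs]."""
    scale = (2 ** (bits - 1) - 1) / max_abs
    return np.clip(np.round(x * scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1) / scale

def low_precision_dot(a: np.ndarray, b: np.ndarray, acc_bits: int = 12) -> float:
    """Dot product whose accumulator is re-quantized after each addition."""
    max_abs = float(np.abs(a).sum() * np.abs(b).max()) + 1e-12   # crude dynamic-range bound
    acc = 0.0
    for ai, bi in zip(a, b):
        acc = quantize(np.array(acc + ai * bi), acc_bits, max_abs).item()
    return acc

rng = np.random.default_rng(0)
a, b = rng.standard_normal(256), rng.standard_normal(256)
print(np.dot(a, b), low_precision_dot(a, b, acc_bits=12))   # compare exact vs. emulated
```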
Tracking ripening tomatoes is time-consuming and labor-intensive. Artificial intelligence combined with computer vision can help users optimize the process of monitoring the ripening status of plants. To this end, we propose a deep learning-based approach for monitoring tomato ripening in complex scenes, with the objective of detecting mature tomatoes so they can be harvested in a timely manner. The proposed approach consists of two parts. First, the images of the scene are passed to a pre-processing layer, which detects the areas of interest (the regions of the image containing tomatoes). These regions are then fed to the maturity detection layer, which, based on a deep neural network, classifies each tomato thumbnail into one of five categories: green, brittle, pink, pale red, and mature red. The experiments are based on images collected from the internet through searches for tomato ripening stages in several languages, including English, German, French, and Spanish. On a dataset of tomato images taken under extreme conditions, the maturity detection layer achieved a good classification rate.
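A minimal sketch of the two-layer pipeline described above (heavily simplified and assumed: the colour-threshold detector, thumbnail size, and CNN layout are illustrative stand-ins, not the authors' implementation):

```python
import numpy as np
import cv2
import torch
import torch.nn as nn

CLASSES = ["green", "brittle", "pink", "pale_red", "mature_red"]  # labels as listed in the abstract

def crop_regions_of_interest(image_bgr: np.ndarray, min_area: int = 500):
    """Return thumbnails of coloured blobs as a crude stand-in for the pre-processing layer."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 60, 60), (85, 255, 255))           # reds through greens
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            crops.append(cv2.resize(image_bgr[y:y + h, x:x + w], (64, 64)))
    return crops

class RipenessCNN(nn.Module):
    """Maturity detection layer: classifies a thumbnail into one of the five categories."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.net(x)

# Example: classify every thumbnail found in a synthetic image.
model = RipenessCNN().eval()
for thumb in crop_regions_of_interest(np.random.randint(0, 255, (480, 640, 3), np.uint8)):
    x = torch.from_numpy(thumb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    print(CLASSES[model(x).argmax(dim=1).item()])
```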
This paper addresses the beam-selection challenges in Multi-User Multiple Input Multiple Output (MU-MIMO) beamforming for mm-wave and THz channels, focusing on the pivotal aspects of spectral efficiency (SE) and computational efficiency. We introduce a novel approach, the Greedy Interference-Optimized Singular Vector Beam-selection (G-IOSVB) algorithm, which offers a strategic balance between high SE and low computational complexity. Our study embarks on a comparative analysis of G-IOSVB against the traditional IOSVB and the exhaustive Singular-Vector Beamspace Search (SVBS) algorithms. The findings reveal that while SVBS achieves the highest SE, it incurs significant computational costs, approximately 162 seconds per channel realization. In contrast, G-IOSVB aligns closely with IOSVB in SE performance yet is markedly more computationally efficient. Heatmaps vividly demonstrate this efficiency, highlighting G-IOSVB's reduced computation time without sacrificing SE. We also delve into the mathematical intricacies of G-IOSVB, demonstrating its theoretical and practical advantages through rigorous expressions and detailed algorithmic analysis. The numerical results illustrate that G-IOSVB stands out as an efficient, practical solution for MU-MIMO systems, making it a promising candidate for high-speed, high-efficiency wireless communication networks.
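A generic greedy beam-selection sketch in the spirit of the description above (an illustration only; the precoder construction, SINR model, and stopping rule are assumptions, not the paper's exact G-IOSVB): each user's candidate beam is its dominant right singular vector, and beams are admitted one at a time, keeping at each step the addition that most increases the interference-aware sum SE.

```python
import numpy as np

def dominant_beam(H):
    """Dominant right singular vector (unit-norm precoder) of a user's channel."""
    return np.linalg.svd(H)[2].conj().T[:, 0]

def sum_se(channels, selected, beams, noise=1.0):
    """Sum SE over selected users; the other selected beams act as interference."""
    se = 0.0
    for k in selected:
        gains = {j: np.linalg.norm(channels[k] @ beams[j]) ** 2 for j in selected}
        interference = sum(g for j, g in gains.items() if j != k)
        se += np.log2(1.0 + gains[k] / (interference + noise))
    return se

def greedy_beam_selection(channels, max_beams):
    beams = [dominant_beam(H) for H in channels]
    selected = []
    while len(selected) < max_beams:
        best = max((u for u in range(len(channels)) if u not in selected),
                   key=lambda u: sum_se(channels, selected + [u], beams), default=None)
        if best is None or sum_se(channels, selected + [best], beams) <= sum_se(channels, selected, beams):
            break                                   # stop when no addition improves the sum SE
        selected.append(best)
    return selected, [beams[u] for u in selected]

# Example: 6 users, 4 RX x 16 TX antennas each, select up to 4 beams.
rng = np.random.default_rng(1)
chans = [rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16)) for _ in range(6)]
users, sel_beams = greedy_beam_selection(chans, max_beams=4)
print(users)
```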
Frost, chilling, and freezing are common hazards in meteorology and agriculture, occurring when the minimum ambient temperature falls below a certain threshold. This phenomenon causes substantial damage, particularly in Fars province. Addressing it requires not only predicting the minimum temperature but also providing enough lead time to implement the necessary protective measures. Empirical methods provided by the Food and Agriculture Organization (FAO) can predict the minimum temperature, but not with sufficient lead time. Machine learning methods can also be used to model the minimum temperature. In this study, we use three methods: the Gated Recurrent Unit (GRU) and the Temporal Convolutional Network (TCN) as deep learning methods, and Gradient Boosting (XGBoost). A customized loss function is designed for the deep learning methods, which is effective in reducing prediction errors. With the deep learning models, we not only observe a reduction in RMSE compared to the empirical methods but also gain more lead time: the minimum temperature for the next 24 hours can be modeled from the current 24 hours. With the gradient boosting model (XGBoost), we retain the same lead time as the deep learning models while further reducing the RMSE. We conclude experimentally that machine learning methods outperform the empirical methods and that, among the implemented models, XGBoost performs best on this problem.
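A minimal sketch of the kind of customized loss such a setup could use (the asymmetric form and its weighting are assumptions, not the paper's exact loss), wrapped around a small GRU that maps the past 24 hourly observations to the next 24 hours:

```python
import torch
import torch.nn as nn

class AsymmetricFrostLoss(nn.Module):
    """Penalizes over-predicting the minimum temperature more heavily than
    under-predicting it, since a missed frost warning is costlier (assumed design)."""
    def __init__(self, over_weight: float = 2.0):
        super().__init__()
        self.over_weight = over_weight

    def forward(self, pred, target):
        err = pred - target
        weight = torch.where(err > 0, torch.full_like(err, self.over_weight), torch.ones_like(err))
        return (weight * err ** 2).mean()

class MinTempGRU(nn.Module):
    """Maps the past 24 hourly observations to the next 24 hourly temperatures."""
    def __init__(self, n_features: int = 1, hidden: int = 32, horizon: int = 24):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                   # x: (batch, 24, n_features)
        _, h = self.gru(x)
        return self.head(h[-1])             # (batch, 24)

model, loss_fn = MinTempGRU(), AsymmetricFrostLoss()
x, y = torch.randn(8, 24, 1), torch.randn(8, 24)
loss_fn(model(x), y).backward()
```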
Early Exit Neural Networks (EENNs) endow a standard Deep Neural Network (DNN) with Early Exit Classifiers (EECs) that provide predictions at intermediate points of the processing when enough confidence in the classification is achieved, bringing many benefits in terms of effectiveness and efficiency. Currently, the design of EENNs is carried out manually by experts, a complex and time-consuming task that requires accounting for many aspects, including the correct placement, the thresholding, and the computational overhead of the EECs. For this reason, research is exploring the use of Neural Architecture Search (NAS) to automate the design of EENNs. Currently, few comprehensive NAS solutions for EENNs have been proposed in the literature, and a fully automated, joint design strategy that considers both the backbone and the EECs remains an open problem. To this end, this work presents Neural Architecture Search for Hardware Constrained Early Exit Neural Networks (NACHOS), the first NAS framework for designing optimal EENNs that satisfy constraints on the accuracy and on the number of Multiply and Accumulate (MAC) operations performed at inference time. In particular, NACHOS jointly designs the backbone and the EECs to select a set of admissible (i.e., constraint-respecting) Pareto-optimal solutions in terms of the best trade-off between accuracy and number of MACs. The results show that the models designed by NACHOS are competitive with state-of-the-art EENNs. Additionally, this work investigates the effectiveness of two novel regularization terms designed for the optimization of the auxiliary classifiers of the EENN.
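A minimal sketch of the early-exit mechanism itself (simplified; the NAS search over placements, thresholds, and MAC constraints described above is not shown, and the layer sizes and confidence threshold are assumptions): an intermediate exit classifier returns early at inference when its softmax confidence clears a threshold, skipping the rest of the backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes: int = 10, threshold: float = 0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.final = nn.Sequential(nn.Flatten(), nn.Linear(32, num_classes))
        self.threshold = threshold

    def forward(self, x):
        h = self.stage1(x)
        logits1 = self.exit1(h)
        if not self.training and F.softmax(logits1, dim=1).max() >= self.threshold:
            return logits1, "early_exit"        # cheap prediction, remaining MACs skipped
        return self.final(self.stage2(h)), "final_exit"

logits, exit_point = EarlyExitNet().eval()(torch.randn(1, 3, 32, 32))
print(exit_point, logits.shape)
```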
Enabling multi-fingered robots to grasp and manipulate objects with human-like dexterity is especially challenging during dynamic, continuous hand-object interactions. Closed-loop feedback control is essential for dexterous hands to dynamically fine-tune hand poses when performing precise functional grasps. This work proposes an adaptive motion planning method based on deep reinforcement learning that adjusts grasping poses according to real-time joint-torque feedback, from pre-grasp to goal grasp. We find that the multi-joint torques of the dexterous hand can sense object positions through contacts and collisions, enabling real-time adjustment of grasps and generating varying grasping trajectories for objects in different positions. In our experiments, the performance gap with and without force feedback reveals the important role of force feedback in adaptive manipulation. Our approach, utilizing force feedback, preliminarily exhibits human-like flexibility, adaptability, and precision.
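A minimal sketch of the feedback idea (the interfaces, joint count, and action scaling are assumptions, not the paper's implementation): a policy whose observation concatenates hand joint angles with measured joint torques, so contact-induced torque changes can drive small real-time corrections of the grasp pose.

```python
import torch
import torch.nn as nn

N_JOINTS = 16  # assumed number of dexterous-hand joints

class TorqueFeedbackPolicy(nn.Module):
    def __init__(self, n_joints: int = N_JOINTS, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_joints, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_joints), nn.Tanh())    # bounded joint-angle deltas

    def forward(self, joint_angles, joint_torques):
        obs = torch.cat([joint_angles, joint_torques], dim=-1)
        return 0.05 * self.net(obs)                     # small per-step pose correction (rad)

# One control step: torques measured after contact steer the next pose command.
policy = TorqueFeedbackPolicy()
angles, torques = torch.zeros(1, N_JOINTS), torch.randn(1, N_JOINTS) * 0.1
next_angles = angles + policy(angles, torques)
```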
Missingness is ubiquitous in multivariate time series and poses an obstacle to reliable downstream analysis. Although recurrent network imputation has achieved state-of-the-art (SOTA) performance, existing models do not scale to the deep architectures that could potentially alleviate issues arising in complex data. Moreover, imputation carries the risk of biased estimates of the ground truth, yet confidence in the imputed values is usually left unmeasured or is computed post hoc from the model output. We propose DEep Attention Recurrent Imputation (DEARI), which jointly estimates missing values and their associated uncertainty in heterogeneous multivariate time series. To jointly represent feature-wise correlations and temporal dynamics, we adopt a self-attention mechanism, along with an effective residual component, to obtain a deep recurrent neural network with good imputation performance and stable convergence. We also leverage self-supervised metric learning to boost performance by optimizing sample similarity. Finally, we transform DEARI into a Bayesian neural network through a novel Bayesian marginalization strategy to produce stochastic DEARI, which outperforms its deterministic counterpart. Experiments show that DEARI surpasses the SOTA in diverse imputation tasks on real-world datasets from air quality control, healthcare, and traffic.
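A minimal sketch of attention-based imputation with an explicit observation mask (greatly simplified; the layer sizes and masking scheme are assumptions, not the DEARI architecture): a single self-attention block predicts missing entries of a multivariate series from the observed ones, with the mask supplied as extra input channels.

```python
import torch
import torch.nn as nn

class AttentionImputer(nn.Module):
    def __init__(self, n_features: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(2 * n_features, d_model)            # values + mask
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, n_features)

    def forward(self, x, mask):
        """x: (batch, time, features) with zeros at missing points; mask: 1 = observed."""
        h = self.embed(torch.cat([x * mask, mask], dim=-1))
        h, _ = self.attn(h, h, h)
        x_hat = self.out(h)
        return mask * x + (1 - mask) * x_hat                        # keep observed, impute the rest

# Example: impute a batch of series with roughly 30% missingness.
x = torch.randn(4, 48, 8)
mask = (torch.rand_like(x) > 0.3).float()
imputed = AttentionImputer(n_features=8)(x, mask)
```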
Improving energy efficiency in industrial production processes is crucial for competitiveness and compliance with climate policies. This paper introduces a data-driven approach to identifying optimal melting patterns in induction furnaces. Through time-series K-means clustering, the melting patterns were classified into distinct clusters based on their temperature profiles. Using the elbow method, 12 clusters were identified, representing the range of melting patterns. Performance parameters such as melting time, energy-specific performance, and carbon cost were established for each cluster, indicating furnace efficiency and environmental impact. Multiple-criteria decision-making methods, including Simple Additive Weighting, Multiplicative Exponential Weighting, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), modified TOPSIS, and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), were used to determine the best-practice cluster. The study successfully identified the cluster with the best performance. Implementing the best-practice operation resulted in an 8.6% reduction in electricity costs, highlighting the potential energy savings in the foundry.
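A minimal TOPSIS sketch for picking a best-practice cluster from cluster-level performance parameters (the weights, criteria directions, and all numbers below are illustrative assumptions, not the study's data):

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return closeness-to-ideal scores (higher is better) for each alternative (row)."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization per criterion
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Rows: clusters; columns: melting time [min], energy per tonne [kWh/t], carbon cost [EUR/t].
clusters = np.array([[55, 520, 40.0],
                     [48, 495, 37.5],
                     [62, 560, 43.0]])
weights = np.array([1 / 3, 1 / 3, 1 / 3])                 # assumed equal weighting
benefit = np.array([False, False, False])                 # all three criteria are costs
scores = topsis(clusters, weights, benefit)
print("best-practice cluster:", int(scores.argmax()), scores.round(3))
```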
Since the weather is chaotic, forecasts aim to predict the distribution of future states rather than make a single prediction. Recently, multiple data-driven weather models have emerged claiming breakthroughs in skill. However, these have mostly been benchmarked using deterministic skill scores, and little is known about their probabilistic skill. Unfortunately, it is hard to fairly compare AI weather models in a probabilistic sense, since variations in the choice of ensemble initialization, definition of state, and noise-injection methodology become confounding. Moreover, even obtaining ensemble forecast baselines is a substantial engineering challenge given the data volumes involved. We sidestep both problems by applying a decades-old idea -- lagged ensembles -- whereby an ensemble is constructed from a moderately sized library of deterministic forecasts. This allows the first parameter-free intercomparison of leading AI weather models' probabilistic skill against an operational baseline. The results reveal that two leading AI weather models, GraphCast and Pangu, are tied on the probabilistic CRPS metric even though the former outperforms the latter in deterministic scoring. We also show that the multiple-time-step loss functions employed by many data-driven weather models are counterproductive: they improve deterministic metrics at the cost of increased dissipation, deteriorating probabilistic skill. This is confirmed through ablations applied to a spherical Fourier Neural Operator (SFNO) approach to AI weather forecasting. Separate SFNO ablations modulating effective resolution reveal that it has a useful effect on ensemble dispersion relevant to achieving good ensemble calibration. We hope these and forthcoming insights from lagged ensembles can help guide the development of AI weather forecasts, and we have therefore shared the diagnostic code.
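A minimal sketch of building a lagged ensemble from deterministic forecasts and scoring it with the ensemble CRPS (the data layout and example numbers are assumptions for illustration, not the shared diagnostic code):

```python
import numpy as np

def lagged_ensemble(forecasts: np.ndarray, valid_step: int, n_lags: int) -> np.ndarray:
    """forecasts[i, l] = forecast initialized at time i with lead l; the members valid at
    `valid_step` come from initializations i = valid_step - lag, each used at lead = lag."""
    return np.array([forecasts[valid_step - lag, lag] for lag in range(n_lags)])

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    """Ensemble CRPS estimator: E|X - y| - 0.5 E|X - X'| over ensemble members X."""
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Example: 10 initialization times x 6 lead steps of a scalar variable.
rng = np.random.default_rng(0)
fcst = rng.standard_normal((10, 6)) * 0.5 + 15.0
members = lagged_ensemble(fcst, valid_step=6, n_lags=4)
print("CRPS:", crps_ensemble(members, obs=15.2))
```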