Abstract: This paper presents a fully automated method for identifying and interpreting cuneiform characters using deep-learning algorithms. Five distinct deep-learning models were trained on a comprehensive dataset of cuneiform characters and evaluated on key performance metrics, including accuracy and precision. Two models demonstrated outstanding performance and were subsequently assessed on cuneiform symbols from the Hammurabi law collection, notably Hammurabi Law 1. Both models correctly recognized the corresponding Akkadian readings of the symbols and produced accurate English translations. Future work will investigate ensemble and stacking approaches to optimize performance, using hybrid architectures to improve detection accuracy and reliability. This research also explores the linguistic relationships between Akkadian, an ancient Mesopotamian language, and Arabic, emphasizing their historical and cultural ties. By merging computational linguistics with archaeology, this study demonstrates the capability of deep learning to decipher ancient scripts, thereby providing significant insights for the understanding and preservation of human history.
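As a point of reference for readers, the sketch below shows one generic way such a cuneiform-sign classifier could be fine-tuned; the abstract does not name the five architectures, so the ResNet-18 backbone, the class-per-folder dataset path, and all hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Hypothetical sketch: fine-tuning one candidate classifier on a directory of
# cuneiform character crops (class-per-folder layout and path are assumed).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # tablet scans are often grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("cuneiform_signs/train", transform=tfm)  # hypothetical path
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))        # one logit per sign class

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:        # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```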
Abstract: Precision farming relies on accurate vegetation monitoring to enhance crop productivity and promote sustainable agricultural practices. This study presents a comprehensive evaluation of UAV-based imaging for vegetation health assessment in a palm tree cultivation region in Dubai. By comparing multispectral and RGB image data, we demonstrate that RGB-based vegetation indices offer performance comparable to more expensive multispectral indices, providing a cost-effective alternative for large-scale agricultural monitoring. Using UAVs equipped with multispectral sensors, we computed indices such as NDVI and SAVI to categorize vegetation into healthy, moderate, and stressed conditions. Simultaneously, RGB-based indices such as VARI and MGRVI delivered comparable results in vegetation classification and stress detection. Our findings highlight the practical benefits of integrating RGB imagery into precision farming, reducing operational costs while maintaining accuracy in plant health monitoring. This research underscores the potential of UAV-based RGB imaging as a powerful tool for precision agriculture, enabling broader adoption of data-driven decision-making in crop management. By leveraging the strengths of both multispectral and RGB imaging, this work advances the state of UAV applications in agriculture, paving the way for more efficient and scalable farming solutions.
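For context, the snippet below gives the standard per-pixel definitions of the four indices named in this abstract; the band arrays, soil-adjustment factor L = 0.5, and the NDVI cut-offs for the three health classes are illustrative assumptions rather than the thresholds used in the study.

```python
# Minimal sketch of NDVI, SAVI, VARI, and MGRVI computed per pixel with NumPy.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def savi(nir, red, L=0.5):                  # L = soil adjustment factor (assumed 0.5)
    return (1 + L) * (nir - red) / (nir + red + L + 1e-9)

def vari(red, green, blue):                 # RGB-only analogue of NDVI
    return (green - red) / (green + red - blue + 1e-9)

def mgrvi(red, green):
    return (green**2 - red**2) / (green**2 + red**2 + 1e-9)

# Example: bin pixels into stressed / moderate / healthy classes using
# illustrative NDVI cut-offs on synthetic reflectance data.
nir, red = np.random.rand(2, 256, 256).astype(np.float32)
classes = np.digitize(ndvi(nir, red), bins=[0.3, 0.6])   # 0=stressed, 1=moderate, 2=healthy
```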
Abstract: Falconry, the ancient practice of training and hunting with falcons, demands vigilant health monitoring to ensure the well-being of these highly valued birds, especially during hunting activities. This paper introduces an approach that leverages concatenated ConvNeXt and EfficientNet models for falcon disease classification. Focused on distinguishing 'Normal,' 'Liver,' and 'Aspergillosis' cases, the study employs a comprehensive dataset for model training and evaluation, using metrics such as accuracy, precision, recall, and F1-score. Through rigorous experimentation and evaluation, we demonstrate the superior performance of the concatenated model compared to traditional methods and standalone architectures. This approach contributes to accurate falcon disease classification, laying the groundwork for further advances in avian veterinary AI applications.
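To make the fusion idea concrete, here is a hedged sketch of one way ConvNeXt and EfficientNet features can be concatenated for a three-class classifier; the specific model variants (ConvNeXt-Tiny, EfficientNet-B0), pooling, and head are assumptions, not the authors' exact architecture.

```python
# Sketch: concatenating ConvNeXt and EfficientNet feature maps for a
# 3-class falcon disease classifier (Normal, Liver, Aspergillosis).
import torch
import torch.nn as nn
from torchvision import models

class ConcatFalconNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.convnext = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT).features
        self.effnet = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(768 + 1280, num_classes)   # 768 (ConvNeXt-T) + 1280 (EfficientNet-B0)

    def forward(self, x):
        a = self.pool(self.convnext(x)).flatten(1)
        b = self.pool(self.effnet(x)).flatten(1)
        return self.head(torch.cat([a, b], dim=1))       # fuse by channel concatenation

model = ConcatFalconNet()
logits = model(torch.randn(1, 3, 224, 224))              # -> shape (1, 3)
```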
Abstract: Many incurable diseases prevalent across global societies stem from various influences, including lifestyle choices, economic conditions, social factors, and genetics. Research focuses predominantly on these diseases because of their widespread nature, aiming to decrease mortality, expand treatment options, and improve healthcare standards. Among them, kidney disease stands out as a particularly severe condition affecting men and women worldwide. There is therefore a pressing need for continued research into innovative, early diagnostic methods that enable more effective treatments. Recently, automatic diagnosis of kidney cancer has become an important challenge, especially for deep learning (DL), because it requires large annotated medical training datasets that are in most cases difficult and expensive to obtain. Furthermore, algorithms typically require data from the same domain as well as a powerful computer with large storage capacity. To overcome these issues, transfer learning (TL) has been proposed, which can produce impressive results by reusing models pre-trained on data from other domains. This paper presents, to the best of the authors' knowledge, the first comprehensive survey of DL-based TL frameworks for kidney cancer diagnosis, helping researchers understand the current challenges and perspectives of this topic. The main limitations and advantages of each framework are identified, and detailed critical analyses are provided. Looking ahead, the article identifies promising directions for future research. Finally, the discussion concludes by reflecting on the pivotal role of TL in the development of precision medicine and its effects on clinical practice and research in oncology.
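As an illustration of the kind of TL recipe the survey reviews, the sketch below freezes an ImageNet-pretrained backbone and retrains only a small head on a hypothetical kidney-imaging dataset; the ResNet-50 backbone, head sizes, and two-class output are assumptions for illustration only.

```python
# Illustrative transfer-learning recipe: reuse pretrained features, train a new head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False          # reuse features learned on the source domain

backbone.fc = nn.Sequential(         # only this head is trained on the target kidney data
    nn.Linear(backbone.fc.in_features, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, 2),               # e.g. tumor vs. normal (assumed classes)
)
trainable = [p for p in backbone.parameters() if p.requires_grad]
# optimizer = torch.optim.Adam(trainable, lr=1e-3)  # then train as usual
```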
Abstract: Polarimetric synthetic aperture radar (PolSAR) images encompass valuable information that can facilitate extensive land cover interpretation and generate diverse output products. Extracting meaningful features from PolSAR data poses challenges distinct from those encountered in optical imagery. Deep learning (DL) methods offer effective solutions for overcoming these challenges in PolSAR feature extraction. Convolutional neural networks (CNNs) play a crucial role in capturing PolSAR image characteristics by leveraging kernel capabilities to exploit local information and the complex-valued nature of PolSAR data. In this study, a novel three-branch fusion of complex-valued CNNs, named the Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification. To validate the performance of the proposed method, classification results are compared against multiple state-of-the-art approaches using the airborne synthetic aperture radar (AIRSAR) datasets of Flevoland and San Francisco, as well as the ESAR Oberpfaffenhofen dataset. The results indicate that the proposed approach improves overall accuracy, with 1.3% and 0.8% gains on the AIRSAR datasets and a 0.5% gain on the ESAR dataset. Analyses conducted on the Flevoland data underscore the effectiveness of the SDF2Net model, achieving a promising overall accuracy of 96.01% even with only a 1% sampling ratio.
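For intuition only, the sketch below shows a minimal stand-in for a three-branch shallow-to-deep fusion network on PolSAR patches; it represents complex values as stacked real/imaginary channels rather than using true complex-valued convolutions, and the branch depths, channel widths, input channels, and class count are illustrative assumptions, not the SDF2Net architecture.

```python
# Simplified three-branch shallow-to-deep fusion CNN on PolSAR-style patches.
import torch
import torch.nn as nn

def branch(in_ch, depth, width=32):
    layers, ch = [], in_ch
    for _ in range(depth):                       # shallow (1), mid (2), deep (3) branches
        layers += [nn.Conv2d(ch, width, 3, padding=1), nn.BatchNorm2d(width), nn.ReLU()]
        ch = width
    layers.append(nn.AdaptiveAvgPool2d(1))
    return nn.Sequential(*layers)

class ThreeBranchFusion(nn.Module):
    def __init__(self, in_ch=18, num_classes=15):  # 9 complex channels -> 18 real channels (assumed)
        super().__init__()
        self.branches = nn.ModuleList([branch(in_ch, d) for d in (1, 2, 3)])
        self.head = nn.Linear(3 * 32, num_classes)

    def forward(self, x):
        feats = [b(x).flatten(1) for b in self.branches]
        return self.head(torch.cat(feats, dim=1))   # fuse shallow-to-deep features

patch = torch.randn(4, 18, 15, 15)                   # batch of small PolSAR patches
logits = ThreeBranchFusion()(patch)                  # -> shape (4, 15)
```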