Nowadays, due to rapid population growth, food shortages have become a critical issue. To stabilize food production, it is important to prevent crops from being attacked by pests. Farmers generally use pesticides to kill pests; however, improper pesticide use also kills insects that are beneficial to crops, such as bees. If bee populations decline too far, the world's food supply will fall short. Moreover, excessive pesticide use seriously pollutes the environment. Farmers therefore need a machine that can automatically recognize pests. Deep learning has recently become popular because of its effectiveness in image classification. In this paper, we propose a small and efficient model called ExquisiteNet for pest recognition, which we expect to deploy on mobile devices. ExquisiteNet mainly consists of two blocks: a double fusion with squeeze-and-excitation-bottleneck block (DFSEB block) and a max feature expansion block (ME block). ExquisiteNet has only 0.98M parameters, and its computing speed is almost the same as that of SqueezeNet. To evaluate our model's performance, we test it on a benchmark pest dataset called IP102. Compared with many state-of-the-art models, such as ResNet101, ShuffleNetV2, MobileNetV3-large, and EfficientNet, our model achieves higher accuracy: 52.32% on the IP102 test set without any data augmentation.
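The DFSEB block builds on the squeeze-and-excitation mechanism; as a rough illustration of that underlying idea (not the paper's exact block design), a minimal PyTorch sketch with an assumed reduction ratio:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by global context.
    Illustrative sketch; reduction ratio is an assumption."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gates in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excite: rescale feature maps
```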
Early detection of COVID-19 is key to containing the pandemic. Disease detection and evaluation based on imaging is fast and cheap and therefore plays an important role in COVID-19 handling. COVID-19 is easier to detect in chest CT; however, CT is expensive, non-portable, and difficult to disinfect, making it unfit as a point-of-care (POC) modality. On the other hand, chest X-ray (CXR) and lung ultrasound (LUS) are widely used, yet COVID-19 findings in these modalities are not always clear. Here we train deep neural networks to significantly enhance the capability to detect, grade, and monitor COVID-19 patients using CXRs and LUS. Collaborating with several hospitals in Israel, we collected a large dataset of CXRs and used it to train a neural network that obtains an above-90% detection rate for COVID-19. In addition, in collaboration with ULTRa (Ultrasound Laboratory Trento, Italy) and hospitals in Italy, we obtained POC ultrasound data annotated with disease severity and trained a deep network for automatic severity grading.
In many real-world scenarios, it is difficult to capture images in the visible light spectrum (VIS) due to poor lighting conditions. However, images can be captured in such scenarios using Near-Infrared (NIR) and Thermal (THM) cameras. NIR and THM images contain limited detail, so there is a need to transform images from THM/NIR to VIS for better understanding. This is a non-trivial task, however, due to large domain discrepancies and the lack of abundant datasets. Generative Adversarial Networks (GANs) can transform images from one domain to another. Most available GAN-based methods use a combination of adversarial and pixel-wise losses (such as L1 or L2) as the training objective. With such objectives, the quality of the transformed images in THM/NIR-to-VIS transformation is still not up to the mark; better objective functions are needed to improve the quality, fine detail, and realism of the transformed images. To address these issues, we introduce a new model for THM/NIR-to-VIS image transformation called the Perceptual Cyclic-Synthesized Generative Adversarial Network (PCSGAN). PCSGAN combines perceptual (i.e., feature-based) losses with pixel-wise and adversarial losses. Both quantitative and qualitative measures are used to judge the performance of the PCSGAN model on the WHU-IIP face and RGB-NIR scene datasets. The proposed PCSGAN outperforms state-of-the-art image transformation models, including Pix2pix, DualGAN, CycleGAN, PS2GAN, and PAN, in terms of the SSIM, MSE, PSNR, and LPIPS evaluation measures. The code is available at \url{https://github.com/KishanKancharagunta/PCSGAN}.
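As a hedged illustration of such a combined objective, a minimal PyTorch sketch of an adversarial + pixel-wise + perceptual generator loss; the VGG feature layer choice and loss weights are assumptions, not PCSGAN's exact configuration:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen VGG16 feature extractor for the perceptual term (layer cut is an assumption).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def generator_loss(fake, real, d_fake_logits, lam_pix=10.0, lam_perc=1.0):
    adv  = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))  # adversarial: fool the discriminator
    pix  = F.l1_loss(fake, real)                        # pixel-wise L1
    perc = F.l1_loss(vgg(fake), vgg(real))              # perceptual: feature-space distance
    return adv + lam_pix * pix + lam_perc * perc
```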
Particle Image Velocimetry (PIV) data processing is adversely affected by light reflections and backgrounds, as well as by model defects and sticky particles that occlude the inner walls of the boundaries. In this paper, a novel approach is proposed for decomposing PIV data into background and foreground components, greatly reducing the effects of such artifacts. This is achieved by applying Robust Principal Component Analysis (RPCA) to the data matrix generated by aggregating the vectorized PIV frames. It is assumed that the data matrix can be decomposed into two statistically different components: a low-rank component depicting the still background and a sparse component representing the moving particles within the imaged geometry. Formulating these assumptions as an optimization problem, the Augmented Lagrange Multiplier (ALM) method is used to decompose the data matrix into its low-rank and sparse components. Experiments and comparisons with the state of the art on several PIV image sequences reveal the superiority of the proposed approach for background removal in PIV data.
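For illustration, a minimal NumPy sketch of RPCA via an inexact augmented Lagrange multiplier scheme (principal component pursuit); parameter defaults follow common practice and are not necessarily those used in the paper:

```python
import numpy as np

def rpca_ialm(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M ~ L + S: low-rank L (still background) plus sparse S
    (moving particles). Illustrative sketch of inexact ALM; practical
    solvers typically also increase mu over the iterations."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))                  # common default weight
    mu = mu or 0.25 * m * n / np.abs(M).sum()
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)  # dual variable init
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: entrywise soft thresholding
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (M - L - S)                              # dual ascent step
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```

Here each column of M would be one vectorized PIV frame, so L recovers the background common to all frames while S isolates the particles.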
Video live streaming is gaining prevalence among video streaming services, especially for the delivery of popular sporting events. Many objective Video Quality Assessment (VQA) models have been developed to predict the perceptual quality of videos. Appropriate databases that exemplify the distortions encountered in live streaming videos are important for designing and learning objective VQA models. Towards making progress in this direction, we built a video quality database specifically designed for live streaming VQA research, called the Laboratory for Image and Video Engineering (LIVE) Livestream Database. The LIVE Livestream Database includes 315 videos of 45 contents impaired by 6 types of distortions. We also performed a subjective quality study using the new database, whereby more than 12,000 human opinions were gathered from 40 subjects. We demonstrate the usefulness of the new resource by performing a holistic evaluation of the performance of current state-of-the-art (SOTA) VQA models. The LIVE Livestream Database is being made publicly available for these purposes at https://live.ece.utexas.edu/research/LIVE_APV_Study/apv_index.html.
Harmonic inpainting with optimised data is very popular for inpainting-based image compression. We improve this approach in three important aspects. Firstly, we replace the standard finite difference discretisation with a finite element method using triangle elements. This not only speeds up inpainting and data selection, but also improves the reconstruction quality. Secondly, we propose highly efficient algorithms for spatial and tonal data optimisation that are several orders of magnitude faster than state-of-the-art methods. Last but not least, we show that our algorithms also allow working with very large images, which has previously been impractical due to the memory and runtime requirements of prior algorithms.
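For context, harmonic inpainting reconstructs the image from a sparse set of stored pixels by solving the Laplace equation; a standard formulation in our own notation (the paper's finite element discretisation of it is not reproduced here):

```latex
% Given an image domain \Omega, a known pixel mask K \subset \Omega,
% and stored data f on K, the reconstruction u satisfies:
\begin{align*}
  u            &= f && \text{on } K,                           \\
  \Delta u     &= 0 && \text{on } \Omega \setminus K,          \\
  \partial_n u &= 0 && \text{on } \partial\Omega \setminus K.
\end{align*}
```

Spatial optimisation then chooses where K lies, and tonal optimisation chooses the stored values f so that the solution u best approximates the original image.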
This paper considers recognizing products from daily photos, an important problem in real-world applications that is also challenging due to background clutter, category diversity, noisy labels, etc. We address this problem with two contributions. First, we introduce a novel large-scale product image dataset, termed Product-90. Instead of collecting product images through labor- and time-intensive image capturing, we take advantage of the web and download images from the reviews of several e-commerce websites, where the images are casually captured by consumers. Labels are assigned automatically from the categories of the e-commerce websites. In total, Product-90 consists of more than 140K images across 90 categories. Because consumers may upload unrelated images, Product-90 inevitably contains noisy labels. As the second contribution, we develop a simple yet efficient \textit{guidance learning} (GL) method for training convolutional neural networks (CNNs) with noisy supervision. The GL method first trains an initial teacher network on the full noisy dataset, and then trains a target/student network on both the large-scale noisy set and a small manually verified clean set in a multi-task manner. Specifically, during student network training, the large-scale noisy data is supervised by its guidance knowledge, which combines its given noisy label with the softened label from the teacher network. We conduct extensive experiments on Product-90 and public datasets, namely Food101, Food-101N, and Clothing1M. Our guidance learning method achieves performance superior to state-of-the-art methods on these datasets.
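As a hedged sketch of the guidance target described above, mixing the given noisy label with the teacher's softened prediction; the mixing weight alpha and temperature T are illustrative assumptions, not the paper's tuned values:

```python
import torch
import torch.nn.functional as F

def guidance_loss(student_logits, noisy_label, teacher_logits,
                  alpha=0.5, T=2.0):
    """Student loss on noisy data: cross-entropy against a guidance target
    that combines the noisy one-hot label with the teacher's softened label."""
    one_hot = F.one_hot(noisy_label, student_logits.size(1)).float()
    soft = F.softmax(teacher_logits / T, dim=1)        # softened teacher prediction
    target = alpha * one_hot + (1 - alpha) * soft      # guidance knowledge
    log_p = F.log_softmax(student_logits, dim=1)
    return -(target * log_p).sum(dim=1).mean()         # CE against the soft target
```

The clean, manually verified set would be supervised with ordinary cross-entropy in a second task branch alongside this loss.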
Visual place recognition is a fundamental capability for the localization of mobile robots. It places image retrieval in the practical context of physical agents operating in a physical world. It is an active field of research, and many different approaches have been proposed and evaluated in many different experiments. In the following, we argue that due to variations of this practical context and individual design decisions, place recognition experiments are barely comparable across different papers, and that there is a variety of properties that can change from one experiment to another. We provide an extensive list of such properties and give examples of how they can make a place recognition experiment easier or harder. This might be interesting for several involved parties: (1) people who just want to select a place recognition approach that suits the properties of their particular task at hand, (2) researchers looking for open research questions who are interested in particularly difficult instances, (3) authors who want to create reproducible papers on this topic, and (4) reviewers whose task is to identify potential problems in papers under review.
Conventional remote sensing image retrieval (RSIR) systems usually perform single-label retrieval, where each image is annotated with a single label representing its most significant semantic content. This assumption, however, ignores the complexity of remote sensing images, where an image might contain multiple classes (i.e., multiple labels), resulting in worse retrieval performance. We therefore propose a novel multi-label RSIR approach with fully convolutional networks (FCN). In our approach, we first train an FCN model using a pixel-wise labeled dataset, and the trained FCN is then used to predict the segmentation map of each image in the considered archive. We finally extract region convolutional features of each image based on its segmentation map. The region features can either be used to perform region-based retrieval or be further post-processed into a feature vector for similarity measurement. The experimental results show that our approach achieves state-of-the-art performance in contrast to conventional single-label and recent multi-label RSIR approaches.
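As a minimal sketch of region feature extraction of this kind, average-pooling convolutional features inside each predicted region; the paper's exact pooling and post-processing may differ:

```python
import torch

def region_features(feature_map, seg_map):
    """One descriptor per predicted class region (illustrative sketch).
    feature_map: (C, H, W) conv features from the FCN;
    seg_map:     (H, W) predicted class index per pixel."""
    feats = []
    for cls in torch.unique(seg_map):
        mask = (seg_map == cls).float()                       # binary region mask
        pooled = (feature_map * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1)
        feats.append(pooled)                                  # average feature in region
    return torch.stack(feats)                                 # (num_regions, C)
```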
This work addresses the task of multilabel image classification. Inspired by the great success of deep convolutional neural networks (CNNs) for single-label visual-semantic embedding, we explore extending these models to multilabel images. Specifically, we propose an image-dependent ranking model, which returns a list of labels ranked by their relevance to the input image. In contrast to conventional CNN models that learn an image representation (i.e., an image embedding vector), the proposed model learns a mapping (i.e., a transformation matrix) from an image in an attempt to differentiate between its relevant and irrelevant labels. Despite the conceptual simplicity of our approach, experimental results on a public benchmark dataset demonstrate that the proposed model achieves state-of-the-art performance while using fewer training images than other multilabel classification methods.
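A minimal sketch of the pairwise ranking objective with which such a model could be trained; the exact scoring of labels via the learned transformation matrix is the paper's, so the per-label scores here are treated as given inputs:

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(scores, pos_idx, neg_idx, margin=1.0):
    """Hinge loss pushing every relevant label's score above every
    irrelevant label's score by a margin (illustrative sketch).
    scores: (num_labels,) relevance scores for one image."""
    pos = scores[pos_idx].unsqueeze(1)            # (P, 1) relevant labels
    neg = scores[neg_idx].unsqueeze(0)            # (1, N) irrelevant labels
    return F.relu(margin - pos + neg).mean()      # average over all P*N pairs

# Hypothetical usage with toy scores for a 90-label vocabulary:
scores = torch.randn(90)
loss = pairwise_ranking_loss(scores,
                             torch.tensor([3, 7]),        # relevant labels
                             torch.tensor([0, 1, 2]))     # irrelevant labels
```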