Michael Bradley

A Realistic Fish-Habitat Dataset to Evaluate Algorithms for Underwater Visual Analysis

Aug 28, 2020
Alzayat Saleh, Issam H. Laradji, Dmitry A. Konovalov, Michael Bradley, David Vazquez, Marcus Sheaves

Visual analysis of complex fish habitats is an important step towards sustainable fisheries for human consumption and environmental protection. Deep learning methods have shown great promise for scene analysis when trained on large-scale datasets. However, current datasets for fish analysis tend to focus on the classification task within constrained, plain environments that do not capture the complexity of underwater fish habitats. To address this limitation, we present DeepFish, a benchmark suite with a large-scale dataset to train and test methods for several computer vision tasks. The dataset consists of approximately 40 thousand images collected underwater from 20 habitats in the marine environments of tropical Australia. The dataset originally contained only classification labels, so we collected point-level and segmentation labels to provide a more comprehensive fish-analysis benchmark. These labels enable models to learn to automatically monitor fish counts, identify fish locations, and estimate fish sizes. Our experiments provide an in-depth analysis of the dataset characteristics and an evaluation of several state-of-the-art approaches on our benchmark. Although models pre-trained on ImageNet perform well on this benchmark, there is still room for improvement. The benchmark therefore serves as a testbed to motivate further development in this challenging domain of underwater computer vision. Code is available at: https://github.com/alzayats/DeepFish

* 10 pages, 5 figures, 3 tables, Accepted for Publication in Scientific Reports (Nature) 14 August 2020 
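As a rough illustration of how such a benchmark might be used (this is not the authors' code), the sketch below fine-tunes an ImageNet-pretrained ResNet-50 from torchvision for the fish/no-fish classification task. The directory layout, model choice, and hyperparameters are assumptions made for illustration; the actual loaders and baselines are in the linked repository.

```python
# Minimal sketch (not the authors' code): fine-tuning an ImageNet-pretrained
# ResNet-50 for binary fish/no-fish classification on DeepFish-style data.
# The "deepfish/train" directory layout is hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    # Normalize with ImageNet statistics to match the pretrained weights.
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: deepfish/train/<class_name>/<image>.jpg
train_set = datasets.ImageFolder("deepfish/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace head: fish / no-fish

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The same backbone could, in principle, be extended with counting or segmentation heads to use the point-level and segmentation labels the benchmark provides.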

Underwater Fish Detection with Weak Multi-Domain Supervision

May 26, 2019
Dmitry A. Konovalov, Alzayat Saleh, Michael Bradley, Mangalam Sankupellay, Simone Marini, Marcus Sheaves

Given a sufficiently large training dataset, it is relatively easy to train a modern convolutional neural network (CNN) as an image classifier. However, for fish classification and/or detection, a CNN trained to detect or classify particular fish species in particular background habitats exhibits much lower accuracy when applied to new or unseen fish species and/or habitats. In practice, the CNN therefore needs to be continuously fine-tuned to handle new project-specific fish species or habitats. In this work we present a labelling-efficient method of training a CNN-based fish detector (with the Xception CNN as the base) on a relatively small number (4,000) of project-domain underwater fish/no-fish images from 20 different habitats. Additionally, 17,000 known-negative (that is, fish-free) general-domain above-water images from VOC2012 were used. Two publicly available fish-domain datasets supplied an additional 27,000 above-water and underwater positive (fish) images. Using this multi-domain collection of images, the trained Xception-based binary (fish/no-fish) classifier achieved 0.17% false positives and 0.61% false negatives on the project's 20,000 negative and 16,000 positive holdout test images, respectively, with an area under the ROC curve (AUC) of 99.94%.

* Accepted for the 2019 International Joint Conference on Neural Networks (IJCNN-2019), Budapest, Hungary, July 14-19, 2019, https://www.ijcnn.org/ 
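To make the setup concrete, here is a minimal Keras sketch of an Xception-based binary fish/no-fish classifier in the spirit of the paper; the classification head, preprocessing, and hyperparameters shown are assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumptions, not the authors' exact setup): an
# Xception-based binary fish/no-fish classifier with a frozen ImageNet
# backbone and a small sigmoid head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # train only the new head first; unfreeze to fine-tune

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # P(fish present in the image)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# x: images preprocessed with tf.keras.applications.xception.preprocess_input
# y: 1 = fish present, 0 = no fish
# model.fit(x, y, epochs=5, validation_split=0.1)
```

Mixing project-domain underwater images with general-domain negatives and external fish-domain positives, as the paper describes, would then amount to pooling those sources when assembling the training inputs and labels.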