Abstract: Explaining why a species lives at a particular location is important for understanding ecological systems and conserving biodiversity. However, existing ecological workflows are fragmented and often inaccessible to non-specialists. We propose an end-to-end visual-to-causal framework that transforms a species image into interpretable causal insights about its habitat preference. The system integrates species recognition, global occurrence retrieval, pseudo-absence sampling, and climate data extraction. We then discover causal structures among environmental features and estimate their influence on species occurrence using modern causal inference methods. Finally, we generate statistically grounded, human-readable causal explanations from structured templates and large language models. We demonstrate the framework on a bee and a flower species and report early results from this ongoing project, showing the potential of a multimodal AI assistant, grounded in recommended ecological modeling practice, to describe species habitats in human-understandable language.
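A minimal sketch of the occurrence-retrieval, pseudo-absence, and causal-discovery steps this abstract describes, assuming GBIF (via the pygbif client) as the occurrence source and the PC algorithm from the causal-learn package for structure discovery. The species name, the random-background pseudo-absence strategy, and the stubbed climate features are illustrative assumptions, not the actual system's components.

```python
# Sketch: retrieve occurrences for a recognized species and run causal
# discovery over environmental features. Assumes pygbif, numpy, and
# causal-learn are installed; feature construction is a placeholder.
import numpy as np
from pygbif import occurrences
from causallearn.search.ConstraintBased.PC import pc

# 1. Global occurrence retrieval (in the full pipeline the species name
#    would come from the image-recognition step).
resp = occurrences.search(scientificName="Bombus terrestris", limit=300)
points = [(r["decimalLatitude"], r["decimalLongitude"])
          for r in resp["results"]
          if "decimalLatitude" in r and "decimalLongitude" in r]

# 2. Pseudo-absence sampling: random background points within the
#    bounding box of the presences (a common, simple strategy).
lats, lons = zip(*points)
rng = np.random.default_rng(0)
absences = np.column_stack([
    rng.uniform(min(lats), max(lats), len(points)),
    rng.uniform(min(lons), max(lons), len(points)),
])

# 3. Climate feature extraction is stubbed here; in practice each point
#    would be matched to bioclimatic rasters (e.g., WorldClim layers).
def climate_features(coords):
    return rng.normal(size=(len(coords), 4))  # placeholder features

X = np.vstack([climate_features(points), climate_features(absences)])
y = np.concatenate([np.ones(len(points)), np.zeros(len(absences))])

# 4. Causal structure discovery over features plus the occurrence label
#    using the PC algorithm (Fisher-z independence test by default).
data = np.column_stack([X, y])
cg = pc(data, alpha=0.05)
print(cg.G)  # learned adjacency structure among features and occurrence
```

In the described framework, the learned graph would then feed a causal-effect estimator and a template/LLM layer that verbalizes the estimated effects.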
Abstract: We introduce AgriBench, the first agriculture benchmark designed to evaluate MultiModal Large Language Models (MM-LLMs) for agricultural applications. To address the shortage of agricultural knowledge-based datasets, we further propose MM-LUCAS, a multimodal agriculture dataset that includes 1,784 landscape images, segmentation masks, depth maps, and detailed annotations (geographical location, country, date, land cover and land use taxonomic details, quality scores, aesthetic scores, etc.), built on the Land Use/Cover Area Frame Survey (LUCAS) dataset, which provides comparable statistics on land use and land cover across the European Union (EU). This work, which is still in progress, presents a groundbreaking perspective on advancing agricultural MM-LLMs and offers valuable insights for future developments and innovations in expert knowledge-based MM-LLMs.
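To make the per-image annotation concrete, a hypothetical MM-LUCAS record could be modeled as below; the field names and example values are our guesses inferred from the annotation types listed in the abstract, not the dataset's published schema.

```python
# Hypothetical record structure for one MM-LUCAS sample, inferred from
# the annotation types listed in the abstract (field names and values
# are illustrative guesses, not the published schema).
from dataclasses import dataclass

@dataclass
class MMLucasRecord:
    image_path: str          # landscape photograph
    mask_path: str           # segmentation mask
    depth_path: str          # depth map
    latitude: float          # geographical location
    longitude: float
    country: str             # EU member state
    date: str                # survey date
    land_cover: str          # LUCAS land-cover taxonomy code
    land_use: str            # LUCAS land-use taxonomy code
    quality_score: float     # image quality rating
    aesthetic_score: float   # aesthetic rating

sample = MMLucasRecord(
    image_path="images/000001.jpg",
    mask_path="masks/000001.png",
    depth_path="depth/000001.png",
    latitude=48.2, longitude=16.4,
    country="AT", date="2018-06-14",
    land_cover="B11", land_use="U111",
    quality_score=0.82, aesthetic_score=0.57,
)
```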
Abstract: Object counting is a popular deep learning task in various domains, including agriculture. A conventional deep learning approach requires a large amount of training data, which is often a logistical problem in real-world applications. To address this issue, we examined how well ChatGPT (GPT-4V) and a general-purpose foundation model for object counting (T-Rex) can count fruit bodies (coffee cherries) in 100 images. The foundation model with few-shot learning outperformed the trained YOLOv8 model (R² = 0.923 and 0.900, respectively). ChatGPT also showed interesting potential, especially when few-shot learning with human feedback was applied (R² = 0.360 without and 0.460 with such feedback). Moreover, we examined the implementation time as a practical consideration. Obtaining results with the foundation model and ChatGPT took far less time than training and running the YOLOv8 model (0.83 hrs, 1.75 hrs, and 161 hrs, respectively). We interpret these results as two surprises for deep learning users in applied domains: a foundation model with few-shot domain-specific learning can drastically reduce the time and effort of the conventional approach, and ChatGPT can deliver relatively good performance. Neither approach requires coding skills, which can foster AI education and dissemination.
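The head-to-head comparison reported above reduces to computing R² between each model's per-image counts and the manual ground truth; a minimal sketch using scikit-learn is shown below. The count values are placeholders, not the study's data.

```python
# Sketch: compare per-image fruit counts from two models against manual
# ground truth with R², mirroring the evaluation described in the
# abstract. All counts below are made-up placeholders.
from sklearn.metrics import r2_score

ground_truth = [112, 87, 45, 203, 150]        # manual cherry counts
counts_foundation = [110, 90, 44, 198, 155]   # e.g., T-Rex few-shot
counts_chatgpt = [80, 120, 60, 150, 170]      # e.g., GPT-4V estimates

for name, preds in [("foundation model", counts_foundation),
                    ("ChatGPT", counts_chatgpt)]:
    print(f"{name}: R^2 = {r2_score(ground_truth, preds):.3f}")
```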