Abstract: Synthetic image source attribution is an open challenge, with an increasing number of image generators being released every year. The complexity and sheer number of available generative techniques, as well as the scarcity of high-quality, diverse open-source datasets for this task, make training and benchmarking synthetic image source attribution models very challenging. WILD is a new in-the-Wild Image Linkage Dataset designed to provide a powerful training and benchmarking tool for synthetic image attribution models. The dataset is built from a closed set of 10 popular commercial generators, which constitutes the training base for attribution models, and an open set of 10 additional generators, simulating a real-world in-the-wild scenario. Each generator is represented by 1,000 images, for a total of 10,000 images in the closed set and 10,000 images in the open set. Half of the images are post-processed with a wide range of operators. WILD allows benchmarking attribution models across a wide range of tasks, including closed- and open-set identification and verification, and robust attribution with respect to post-processing and adversarial attacks. Models trained on WILD are expected to benefit from the challenging scenario the dataset represents. Moreover, an assessment of seven baseline methodologies on closed- and open-set attribution is presented, including robustness tests with respect to post-processing.
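To make the open-set attribution task concrete, the following is a minimal sketch of one common evaluation strategy (not prescribed by WILD): a classifier trained on the 10 closed-set generators rejects images whose top softmax confidence falls below a threshold as coming from an unseen generator. The threshold value, class names, and probability interface are illustrative assumptions.

```python
"""Illustrative sketch of confidence-thresholded open-set attribution.

The predict_proba-style probabilities, the 0.5 threshold, and the
generator label names are assumptions for illustration only.
"""
import numpy as np


def open_set_attribution(probs: np.ndarray, closed_set_labels: list[str],
                         threshold: float = 0.5) -> list[str]:
    """Map per-image class probabilities to a generator label or 'unknown'."""
    predictions = []
    for p in probs:
        top = int(np.argmax(p))
        # Low confidence suggests the image comes from an open-set generator.
        if p[top] < threshold:
            predictions.append("unknown")
        else:
            predictions.append(closed_set_labels[top])
    return predictions


# Example usage with dummy probabilities over a 10-class closed set.
labels = [f"generator_{i}" for i in range(10)]
dummy_probs = np.random.dirichlet(np.ones(10), size=4)
print(open_set_attribution(dummy_probs, labels))
```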
Abstract: The study of frequency components derived from the Discrete Cosine Transform (DCT) has been widely used in image analysis. In recent years it has been observed that significant information about the lifecycle of an image can be extracted from these components, but no study has focused on the relationship between them and the source resolution of the image. In this work, we investigate a novel image resolution classifier that employs DCT statistics to detect the original resolution of images; in particular, this insight is exploited to address the challenge of identifying cropped images. By training a Machine Learning (ML) classifier on entire (non-cropped) images, the resulting model can leverage this information to detect cropping. The results demonstrate the classifier's reliability in distinguishing between cropped and non-cropped images, providing a dependable estimate of their original resolution. This advancement has significant implications for image processing applications, including digital security, authenticity verification, and visual quality analysis, by offering a new tool for detecting image manipulations and enhancing qualitative image assessment. This work opens new perspectives in the field, with the potential to transform image analysis and usage across multiple domains.
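As a rough illustration of the pipeline described above, the sketch below computes simple block-wise DCT statistics from a grayscale image and feeds them to an off-the-shelf classifier. The 8x8 block size, the mean/standard-deviation features, the Random Forest model, and the resolution labels are assumptions for illustration; the actual statistics and classifier used in the work may differ.

```python
"""Minimal sketch: block-wise DCT statistics as features for a resolution classifier."""
import numpy as np
from scipy.fft import dctn
from sklearn.ensemble import RandomForestClassifier


def dct_statistics(gray: np.ndarray, block: int = 8) -> np.ndarray:
    """Mean and std of each of the 64 DCT coefficients over all 8x8 blocks."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block)
    coeffs = dctn(blocks, axes=(1, 2), norm="ortho")  # per-block 2D DCT
    flat = coeffs.reshape(len(coeffs), -1)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])


# Toy training loop on random arrays standing in for full-resolution images.
rng = np.random.default_rng(0)
X = np.stack([dct_statistics(rng.random((256, 256))) for _ in range(20)])
y = rng.integers(0, 3, size=20)  # e.g., 3 hypothetical original-resolution classes
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict(X[:2]))
```

In a real setting, a model trained this way on non-cropped images could flag an image as cropped when the predicted original resolution disagrees with the observed one.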