Abstract: The VMAF (Video Multi-Method Assessment Fusion) metric for image and video coding has recently gained popularity, as it is reported to correlate well with human perception. This makes training, and particularly fine-tuning, machine-learned codecs on this metric attractive. However, VMAF has been shown to be attackable: for example, unsharpening an image can increase its VMAF score while decreasing its perceptual quality. A variant called VMAF NEG has been designed to be more robust against such attacks and should therefore be more suitable for codec fine-tuning. In this paper, our contributions are threefold. First, we identify and analyze the remaining vulnerability of VMAF NEG to attacks, in particular the attack of employing VMAF NEG itself for image codec fine-tuning. Second, to benefit from VMAF NEG's high correlation with human perception, we propose a robust loss incorporating VMAF NEG for fine-tuning either the encoder or the decoder. Third, we support our quantitative objective results with perceptual impressions of selected image examples.