



Abstract: The proliferation of deep neural networks (DNNs) in commercial applications is expanding rapidly. Simultaneously, the increasing complexity and cost of training DNN models have intensified the urgency of protecting the intellectual property embodied in trained models. In this regard, DNN watermarking has emerged as a crucial safeguarding technique. This paper presents FedReverse, a novel multiparty reversible watermarking approach that provides robust copyright protection while minimizing the impact on model performance. Unlike existing methods, FedReverse enables collaborative watermark embedding by multiple parties after model training, so that each party can assert an individual copyright claim. In addition, FedReverse is reversible, enabling complete watermark removal with unanimous client consent. FedReverse achieves perfect covering, ensuring that observing the watermarked content reveals no information about the hidden watermark, and it resists Known Original Attacks (KOA), making it highly challenging for attackers to forge watermarks or infer the key. The paper further evaluates FedReverse through comprehensive simulations involving a multilayer perceptron (MLP) and convolutional neural networks (CNNs) trained on the MNIST dataset. The simulations demonstrate FedReverse's robustness, reversibility, and minimal impact on model accuracy across varying embedding parameters and multiple-client scenarios.
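The abstract gives no algorithmic detail, but the multiparty/consent structure it describes can be illustrated. The sketch below uses a simple additive, key-derived perturbation, which is not FedReverse's actual embedding, purely to show how each client can mark key-selected weight positions independently after training and how complete removal requires every client's key; all function names, parameters, and values here are illustrative assumptions.

```python
import numpy as np

def _client_signal(key, n_weights, n_marks, strength):
    """Derive this client's mark positions and watermark signal from its secret key."""
    rng = np.random.default_rng(key)
    idx = rng.choice(n_weights, size=n_marks, replace=False)  # unique positions per client
    sig = strength * rng.standard_normal(n_marks)             # key-derived watermark signal
    return idx, sig

def embed_all(weights, client_keys, n_marks=64, strength=1e-4):
    """Each client independently perturbs its own key-selected weights."""
    marked = weights.copy()
    for key in client_keys:
        idx, sig = _client_signal(key, weights.size, n_marks, strength)
        marked.flat[idx] += sig
    return marked

def remove_all(marked, client_keys, n_marks=64, strength=1e-4):
    """Removal needs every client's key: any missing key leaves that client's mark in place."""
    restored = marked.copy()
    for key in client_keys:
        idx, sig = _client_signal(key, marked.size, n_marks, strength)
        restored.flat[idx] -= sig
    return restored
```

Because the perturbations are additive, the per-client operations commute, so clients can embed in any order and removal is exact up to floating-point rounding even where their positions overlap; a full reversible scheme such as the paper's would additionally guarantee bit-exact weight recovery.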
Abstract: Static deep neural network (DNN) watermarking embeds watermarks into the weights of a DNN model by irreversible methods, which permanently distorts the watermarked model and cannot meet the requirements of integrity authentication. For these reasons, reversible data hiding (RDH) is more attractive for the copyright protection of DNNs. This paper proposes a novel RDH-based static DNN watermarking method that improves on the non-reversible quantization index modulation (QIM). Targeting the floating-point weights of DNNs, the idea of our RDH method is to add a scaled quantization error back to the cover object, so that the original weights can be recovered exactly when the watermark is removed. Two schemes are designed to realize integrity protection and legitimate authentication of DNNs. Simulation results on training loss and classification accuracy demonstrate the superior feasibility, effectiveness, and adaptability of the proposed method over histogram shifting (HS).
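Since the abstract names the mechanism concretely, a compact sketch may help. The following is a minimal illustration of one way to realize reversible QIM along the lines described: quantize each weight onto the sublattice selected by the watermark bit, then add the scaled quantization error back onto the quantized value. The step size DELTA, the scaling factor ALPHA, and the dither layout are assumptions chosen for the example, not values from the paper.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper). Powers of two keep the
# float arithmetic clean; ALPHA < 0.5 keeps marked values inside the decision cell.
DELTA = 2.0 ** -10   # quantization step
ALPHA = 0.25         # quantization-error scaling factor

def embed(weights, bits):
    """Reversible QIM: quantize to the bit's sublattice, then add back the
    scaled quantization error so the original weight can be recovered."""
    dither = bits * (DELTA / 2)                                # sublattice offset per bit
    q = DELTA * np.round((weights - dither) / DELTA) + dither  # standard QIM quantizer
    err = weights - q                                          # |err| <= DELTA/2
    return q + ALPHA * err                                     # scaled error rides along

def extract_and_restore(marked):
    """Recover the embedded bits and (up to float rounding) the original weights."""
    half = DELTA / 2
    n = np.round(marked / half)          # index of the nearest point on the union lattice
    q = half * n                         # that lattice point is the QIM output
    bits = n.astype(np.int64) % 2        # odd multiples of DELTA/2 carry bit 1
    weights = q + (marked - q) / ALPHA   # undo the error scaling
    return bits, weights

# Quick self-check on stand-in "weights".
rng = np.random.default_rng(0)
w = 0.05 * rng.standard_normal(8)
b = rng.integers(0, 2, size=8)
b_hat, w_hat = extract_and_restore(embed(w, b))
assert np.array_equal(b, b_hat) and np.allclose(w, w_hat)
```

Extraction works because the scaled error displaces the marked value by at most ALPHA * DELTA/2 < DELTA/4, so the nearest union-lattice point is still the quantizer output; dividing the residual by ALPHA then restores the original weight.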