Chuanyao Zhang

Machine Unlearning Methodology based on Stochastic Teacher Network

Aug 28, 2023
Xulong Zhang, Jianzong Wang, Ning Cheng, Yifu Sun, Chuanyao Zhang, Jing Xiao

The rise of the "right to be forgotten" has prompted research on machine unlearning, which grants data owners the right to actively withdraw data that was used for model training and requires that the contribution of that data to the model be eliminated. A straightforward way to achieve this is to retrain the model on the remaining data, but this is unacceptable to the other data owners who continue to participate in training. Existing machine unlearning methods have proven ineffective at quickly removing knowledge from deep learning models. This paper proposes using a stochastic network as a teacher to expedite the mitigation of the influence of the forgotten data on the model. We performed experiments on three datasets, and the findings demonstrate that our approach can efficiently mitigate the influence of the target data within a single epoch. This allows for one-time erasure and reconstruction of the model, and the reconstructed model achieves the same performance as a retrained model.
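Since the abstract describes the method only at a high level, the following is a minimal PyTorch sketch of the general idea, assuming a classification model: outputs on the forget set are distilled toward a randomly re-initialized "stochastic teacher", while a frozen copy of the original model anchors the outputs on retained data. The loss weighting, teacher initialization, and single-epoch loop are illustrative assumptions, not the paper's exact procedure.

```python
import copy
import torch
import torch.nn.functional as F

def unlearn_one_epoch(model, forget_loader, retain_loader, lr=1e-4, device="cpu"):
    """Single-epoch erasure sketch: on forget-set batches, distill from a
    randomly re-initialized (stochastic) teacher so the model's outputs on
    the forgotten data become uninformative; on retain-set batches, distill
    from a frozen snapshot of the original model to preserve its knowledge."""
    model = model.to(device).train()
    # Frozen snapshot of the original model acts as the "retain" teacher.
    original = copy.deepcopy(model).to(device).eval()
    # Stochastic teacher: same architecture, weights re-randomized, frozen.
    teacher = copy.deepcopy(model).to(device).eval()
    with torch.no_grad():
        for p in teacher.parameters():
            p.normal_(0.0, 0.02)  # fresh random weights

    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for (x_f, _), (x_r, _) in zip(forget_loader, retain_loader):
        x_f, x_r = x_f.to(device), x_r.to(device)
        with torch.no_grad():
            t_forget = F.softmax(teacher(x_f), dim=-1)   # random-teacher targets
            t_retain = F.softmax(original(x_r), dim=-1)  # original-model targets
        loss = (F.kl_div(F.log_softmax(model(x_f), dim=-1), t_forget,
                         reduction="batchmean")
                + F.kl_div(F.log_softmax(model(x_r), dim=-1), t_retain,
                           reduction="batchmean"))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```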

* Accepted by the 19th International Conference on Advanced Data Mining and Applications (ADMA 2023)

Variational Information Bottleneck for Effective Low-resource Audio Classification

Jul 10, 2021
Shijing Si, Jianzong Wang, Huiming Sun, Jianhan Wu, Chuanyao Zhang, Xiaoyang Qu, Ning Cheng, Lei Chen, Jing Xiao

Large-scale deep neural networks (DNNs) such as convolutional neural networks (CNNs) have achieved impressive performance in audio classification owing to their powerful capacity and strong generalization ability. However, when trained on low-resource tasks, a DNN model is prone to overfitting the small dataset and learning too much redundant information. To address this issue, we propose to use a variational information bottleneck (VIB) to mitigate overfitting and suppress irrelevant information. In this work, we conduct experiments on a 4-layer CNN; however, the VIB framework is ready to use and can easily be combined with many other state-of-the-art network architectures. Evaluation on several audio datasets shows that our approach significantly outperforms baseline methods, yielding more than 5.0% improvement in classification accuracy in some low-resource settings.
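As a rough illustration of where a VIB layer sits between a CNN feature extractor and the classifier, the sketch below follows the standard VIB formulation (a stochastic bottleneck via the reparameterization trick, plus a beta-weighted KL penalty toward a standard normal prior); the layer sizes and the beta value are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBHead(nn.Module):
    """Variational information bottleneck head: compresses CNN features into
    a stochastic latent z and classifies from z. The KL term penalizes any
    information kept in z beyond what the label requires."""
    def __init__(self, feat_dim=512, bottleneck_dim=128, num_classes=10):
        super().__init__()
        self.mu = nn.Linear(feat_dim, bottleneck_dim)
        self.logvar = nn.Linear(feat_dim, bottleneck_dim)
        self.classifier = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, features):
        mu, logvar = self.mu(features), self.logvar(features)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # Closed-form KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
        return self.classifier(z), kl

# Usage: cross-entropy plus the beta-weighted KL penalty.
#   logits, kl = vib_head(cnn_features)
#   loss = F.cross_entropy(logits, labels) + beta * kl
```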

* Accepted by Interspeech 2021