Shen Chen

Contrastive Pseudo Learning for Open-World DeepFake Attribution

Sep 20, 2023
Zhimin Sun, Shen Chen, Taiping Yao, Bangjie Yin, Ran Yi, Shouhong Ding, Lizhuang Ma

The challenge of source attribution for forged faces has gained widespread attention due to the rapid development of generative techniques. While many recent works have taken essential steps on GAN-generated faces, more threatening attacks involving identity swapping or expression transfer are still overlooked, and the forgery traces hidden in unknown attacks from open-world unlabeled faces remain under-explored. To push the frontier of related research, we introduce a new benchmark called Open-World DeepFake Attribution (OW-DFA), which aims to evaluate attribution performance against various types of fake faces under open-world scenarios. Meanwhile, we propose a novel framework named Contrastive Pseudo Learning (CPL) for the OW-DFA task that 1) introduces a Global-Local Voting module to guide the feature alignment of forged faces with different manipulated regions, and 2) designs a Confidence-based Soft Pseudo-label strategy to mitigate the pseudo-label noise caused by similar methods in the unlabeled set. In addition, we extend the CPL framework with a multi-stage paradigm that leverages pre-training and iterative learning to further enhance traceability performance. Extensive experiments verify the superiority of our proposed method on OW-DFA and also demonstrate the interpretability of the deepfake attribution task and its impact on improving the security of deepfake detection.
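
The abstract does not include code; as a rough illustration of the confidence-based soft pseudo-label idea, here is a minimal PyTorch sketch. The temperature, confidence threshold, and mixing rule are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn.functional as F

def soft_pseudo_labels(logits, temperature=0.5, conf_threshold=0.8):
    """Turn classifier logits on unlabeled faces into soft pseudo-labels.

    Confident samples receive (nearly) hard one-hot targets, while uncertain
    samples keep a soft distribution, limiting the noise introduced by
    visually similar manipulation methods. All hyperparameters here are
    illustrative, not taken from the paper.
    """
    probs = F.softmax(logits / temperature, dim=1)        # sharpened predictions
    conf, hard = probs.max(dim=1)                          # per-sample confidence
    one_hot = F.one_hot(hard, logits.size(1)).float()
    weight = (conf >= conf_threshold).float().unsqueeze(1)
    return weight * one_hot + (1.0 - weight) * probs

# Example: 4 unlabeled samples, 5 candidate attribution classes.
targets = soft_pseudo_labels(torch.randn(4, 5))
```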

* 16 pages, 7 figures, ICCV 2023 

Continual Face Forgery Detection via Historical Distribution Preserving

Aug 11, 2023
Ke Sun, Shen Chen, Taiping Yao, Xiaoshuai Sun, Shouhong Ding, Rongrong Ji

Face forgery techniques have advanced rapidly and pose serious security threats. Existing face forgery detection methods try to learn generalizable features, but they still fall short of practical application. Additionally, fine-tuning these methods on historical training data is resource-intensive in terms of time and storage. In this paper, we focus on a novel and challenging problem: Continual Face Forgery Detection (CFFD), which aims to efficiently learn from new forgery attacks without forgetting previous ones. Specifically, we propose a Historical Distribution Preserving (HDP) framework that reserves and preserves the distributions of historical faces. To achieve this, we use universal adversarial perturbations (UAP) to simulate the historical forgery distribution, and knowledge distillation to maintain the distribution variation of real faces across different models. We also construct a new benchmark for CFFD with three evaluation protocols. Extensive experiments on this benchmark show that our method outperforms state-of-the-art competitors.
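
As a hedged illustration of the knowledge-distillation component used to keep a new model's predictions on real faces close to those of the previous-task model, a minimal PyTorch sketch follows; the temperature and the plain KL formulation are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL-based knowledge distillation: the current (student) model is pushed
    to reproduce the softened predictions of the frozen previous-task
    (teacher) model. T is an illustrative temperature."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Example: a batch of 8 real faces, 2-way real/fake logits from both models.
loss = distillation_loss(torch.randn(8, 2), torch.randn(8, 2))
```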

Towards General Visual-Linguistic Face Forgery Detection

Jul 31, 2023
Ke Sun, Shen Chen, Taiping Yao, Xiaoshuai Sun, Shouhong Ding, Rongrong Ji

Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust. Existing methods mostly treat this task as binary classification, using digital labels or mask signals to train the detection model. We argue that such supervision lacks semantic information and interpretability. To address these issues, we propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as annotation. Since text annotations are not available in current deepfake datasets, VLFFD first generates mixed forgery images with corresponding fine-grained prompts via a Prompt Forgery Image Generator (PFIG). The fine-grained mixed data and the coarse-grained original data are then jointly trained with the Coarse-and-Fine Co-training framework (C2F), enabling the model to gain more generalization and interpretability. Experiments show that the proposed method improves existing detection models on several challenging benchmarks.
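
To make the coarse-and-fine idea concrete, the sketch below combines a coarse binary real/fake loss with a CLIP-style image-text contrastive loss over fine-grained prompts. This is only an illustrative guess at how such a joint objective could look; the actual C2F losses and their weighting are not specified in the abstract.

```python
import torch
import torch.nn.functional as F

def joint_coarse_fine_loss(coarse_logits, real_fake_labels,
                           image_emb, text_emb, temperature=0.07):
    """Illustrative joint objective: coarse real/fake cross-entropy plus a
    symmetric image-to-prompt contrastive term. Names, weighting (1:1), and
    the temperature are assumptions for demonstration only."""
    coarse = F.cross_entropy(coarse_logits, real_fake_labels)
    img = F.normalize(image_emb, dim=1)
    txt = F.normalize(text_emb, dim=1)
    logits = img @ txt.t() / temperature          # similarity of image i to prompt j
    targets = torch.arange(img.size(0), device=img.device)
    fine = (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
    return coarse + fine

# Example: 4 images, 2 coarse classes, 128-d image and prompt embeddings.
loss = joint_coarse_fine_loss(torch.randn(4, 2), torch.randint(0, 2, (4,)),
                              torch.randn(4, 128), torch.randn(4, 128))
```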

Artificial Intelligence Security Competition (AISC)

Dec 07, 2022
Yinpeng Dong, Peng Chen, Senyou Deng, Lianji L, Yi Sun, Hanyu Zhao, Jiaxing Li, Yunteng Tan, Xinyu Liu, Yangyi Dong, Enhui Xu, Jincai Xu, Shu Xu, Xuelin Fu, Changfeng Sun, Haoliang Han, Xuchong Zhang, Shen Chen, Zhimin Sun, Junyi Cao, Taiping Yao, Shouhong Ding, Yu Wu, Jian Lin, Tianpeng Wu, Ye Wang, Yu Fu, Lin Feng, Kangkang Gao, Zeyu Liu, Yuanzhe Pang, Chengqi Duan, Huipeng Zhou, Yajie Wang, Yuhang Zhao, Shangbo Wu, Haoran Lyu, Zhiyu Lin, Yifei Gao, Shuang Li, Haonan Wang, Jitao Sang, Chen Ma, Junhao Zheng, Yijia Li, Chao Shen, Chenhao Lin, Zhichao Cui, Guoshuai Liu, Huafeng Shi, Kun Hu, Mengxin Zhang

The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, China Industrial Control Systems Cyber Emergency Response Team, Institute for Artificial Intelligence, Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the competition rules of these three tracks and the solutions of the top-ranking teams in each track.

* Technical report of AISC 

Exploiting Fine-grained Face Forgery Clues via Progressive Enhancement Learning

Dec 28, 2021
Qiqi Gu, Shen Chen, Taiping Yao, Yang Chen, Shouhong Ding, Ran Yi

With the rapid development of facial forgery techniques, forgery detection has attracted increasing attention due to security concerns. Existing approaches attempt to use frequency information to mine subtle artifacts in high-quality forged faces. However, their exploitation of frequency information is coarse-grained and, more importantly, their vanilla learning process struggles to extract fine-grained forgery traces. To address this issue, we propose a progressive enhancement learning framework that exploits both RGB and fine-grained frequency clues. Specifically, we perform a fine-grained decomposition of RGB images to completely decouple the real and fake traces in the frequency space. We then build the framework on a two-branch network combined with self-enhancement and mutual-enhancement modules. The self-enhancement module captures traces in different input spaces based on spatial noise enhancement and channel attention. The mutual-enhancement module concurrently enhances RGB and frequency features by communicating in the shared spatial dimension. The progressive enhancement process facilitates the learning of discriminative features with fine-grained face forgery clues. Extensive experiments on several datasets show that our method outperforms state-of-the-art face forgery detection methods.
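
A small illustration of the frequency-decomposition idea: the sketch below splits an image into frequency bands with a fixed 2-D DCT. The paper's decomposition is finer-grained and learned jointly with the network; the fixed three-band split here is purely an assumption for demonstration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def frequency_bands(image, n_bands=3):
    """Decompose a grayscale image into low/mid/high frequency components
    via a 2-D DCT, returning one reconstructed image per band."""
    h, w = image.shape
    coeffs = dctn(image, norm="ortho")
    # Normalised distance of each coefficient from the DC term, in [0, 1).
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    radius = (yy / h + xx / w) / 2.0
    edges = np.linspace(0.0, 1.0, n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi) if hi < 1.0 else (radius >= lo)
        bands.append(idctn(coeffs * mask, norm="ortho"))
    return bands

# Example: split a toy 64x64 image into low, mid, and high frequency parts.
low, mid, high = frequency_bands(np.random.rand(64, 64))
```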

Dual Contrastive Learning for General Face Forgery Detection

Dec 27, 2021
Ke Sun, Taiping Yao, Shen Chen, Shouhong Ding, Jilin Li, Rongrong Ji

With various facial manipulation techniques arising, face forgery detection has drawn growing attention due to security concerns. Previous works typically formulate face forgery detection as a classification problem based on cross-entropy loss, which emphasizes category-level differences rather than the essential discrepancies between real and fake faces, limiting model generalization in unseen domains. To address this issue, we propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which specially constructs positive and negative paired data and performs contrastive learning at different granularities to learn generalized feature representations. Concretely, combined with a hard-sample selection strategy, Inter-Instance Contrastive Learning (Inter-ICL) is first proposed to promote the learning of task-related discriminative features by constructing instance pairs. Moreover, to further explore the essential discrepancies, Intra-Instance Contrastive Learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in forged faces by constructing local-region pairs inside instances. Extensive experiments and visualizations on several datasets demonstrate the generalization of our method over state-of-the-art competitors.
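
For intuition, the following is a plain supervised-contrastive sketch in PyTorch: embeddings sharing a real/fake label are pulled together and others are pushed apart. DCL's Inter-ICL additionally relies on hard-sample selection and specially constructed pairs, which this sketch omits.

```python
import torch
import torch.nn.functional as F

def inter_instance_contrastive(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over instance embeddings: for each sample,
    other samples with the same label act as positives, the rest as negatives.
    Temperature is an illustrative value."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()
    sim = sim.masked_fill(eye, float("-inf"))          # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()

# Example: 8 instance embeddings with binary real/fake labels.
loss = inter_instance_contrastive(torch.randn(8, 128), torch.randint(0, 2, (8,)))
```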

* This paper was accepted by AAAI 2022 Conference on Artificial Intelligence 

Local Relation Learning for Face Forgery Detection

May 06, 2021
Shen Chen, Taiping Yao, Yang Chen, Shouhong Ding, Jilin Li, Rongrong Ji

With the rapid development of facial manipulation techniques, face forgery detection has received considerable attention in digital media forensics due to security concerns. Most existing methods formulate face forgery detection as a classification problem and utilize binary labels or manipulated-region masks as supervision. However, without considering the correlation between local regions, these global supervisions are insufficient for learning a generalized feature and are prone to overfitting. To address this issue, we propose a novel perspective on face forgery detection via local relation learning. Specifically, we propose a Multi-scale Patch Similarity Module (MPSM), which measures the similarity between features of local regions and forms a robust and generalized similarity pattern. Moreover, we propose an RGB-Frequency Attention Module (RFAM) to fuse information from both the RGB and frequency domains for a more comprehensive local feature representation, which further improves the reliability of the similarity pattern. Extensive experiments show that the proposed method consistently outperforms state-of-the-art methods on widely used benchmarks. Furthermore, detailed visualizations show the robustness and interpretability of our method.
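
A minimal sketch of computing a patch-similarity pattern from a CNN feature map is shown below; MPSM operates at multiple scales and fuses RGB and frequency features, so this single-scale cosine-similarity version is only illustrative.

```python
import torch
import torch.nn.functional as F

def patch_similarity_map(features):
    """Pairwise cosine similarity between local feature vectors of a feature
    map of shape (B, C, H, W); returns a (B, H*W, H*W) similarity pattern."""
    patches = features.flatten(2).transpose(1, 2)     # (B, H*W, C)
    patches = F.normalize(patches, dim=2)
    return patches @ patches.transpose(1, 2)           # (B, H*W, H*W)

# Example: a toy feature map with 256 channels on an 8x8 spatial grid.
sim = patch_similarity_map(torch.randn(2, 256, 8, 8))
```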

* 8 pages, 6 figures, Accepted by AAAI2021 

DeeperForensics Challenge 2020 on Real-World Face Forgery Detection: Methods and Results

Feb 18, 2021
Liming Jiang, Zhengkui Guo, Wayne Wu, Zhaoyang Liu, Ziwei Liu, Chen Change Loy, Shuo Yang, Yuanjun Xiong, Wei Xia, Baoying Chen, Peiyu Zhuang, Sili Li, Shen Chen, Taiping Yao, Shouhong Ding, Jilin Li, Feiyue Huang, Liujuan Cao, Rongrong Ji, Changlei Lu, Ganchao Tan

This paper reports the methods and results of the DeeperForensics Challenge 2020 on real-world face forgery detection. The challenge employs the DeeperForensics-1.0 dataset, one of the most extensive publicly available real-world face forgery detection datasets, comprising 60,000 videos with a total of 17.6 million frames. Model evaluation is conducted online on a high-quality hidden test set with multiple sources and diverse distortions. A total of 115 participants registered for the competition, and 25 teams made valid submissions. We summarize the winning solutions and discuss potential research directions.

* Technical report. Challenge website: https://competitions.codalab.org/competitions/25228 

Multi-stream Convolutional Neural Network with Frequency Selection for Robust Speaker Verification

Jan 12, 2021
Wei Yao, Shen Chen, Jiamin Cui, Yaolin Lou

Speaker verification aims to verify whether an input utterance corresponds to the claimed speaker. Conventionally, such systems are deployed in a single-stream scenario, wherein the feature extractor operates over the full frequency range. In this paper, we hypothesize that a machine can learn enough to perform the classification task while listening to only a partial frequency range instead of the full range, a technique we call frequency selection, and we further propose a multi-stream Convolutional Neural Network (CNN) framework built on this technique for speaker verification. The proposed framework accommodates diverse temporal embeddings generated from multiple streams to enhance the robustness of acoustic modeling. To diversify the temporal embeddings, we consider feature augmentation with frequency selection: the full frequency band is manually segmented into several sub-bands, and the feature extractor of each stream selects which sub-bands to use as its target frequency domain. Unlike the conventional single-stream solution, in which each utterance is processed only once, multiple streams process it in parallel in this framework. The input utterance for each stream is pre-processed by a frequency selector within a specified frequency range and post-processed by mean normalization. The normalized temporal embeddings of all streams then flow into a pooling layer to generate fused embeddings. We conduct extensive experiments on the VoxCeleb dataset, and the results demonstrate that the multi-stream CNN significantly outperforms the single-stream baseline with a 20.53% relative improvement in minimum Decision Cost Function (minDCF).
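
A toy sketch of the frequency-selection idea: split a spectrogram into contiguous sub-bands, one per stream, and mean-normalise each over time. The equal-width split and the number of streams are illustrative assumptions; in the paper each stream selects its own target sub-bands.

```python
import numpy as np

def split_subbands(spectrogram, n_streams=3):
    """Split a (freq_bins, frames) spectrogram into contiguous sub-bands, one
    per stream, and apply per-frequency mean normalisation over time."""
    bands = np.array_split(spectrogram, n_streams, axis=0)
    return [b - b.mean(axis=1, keepdims=True) for b in bands]

# Example: 80 mel bins over 200 frames, split across 3 parallel streams.
streams = split_subbands(np.random.rand(80, 200))
```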

* 12 pages, 11 figures, 8 tables 

Generalized Operating Procedure for Deep Learning: an Unconstrained Optimal Design Perspective

Dec 31, 2020
Shen Chen, Mingwei Zhang, Jiamin Cui, Wei Yao

Deep learning (DL) has brought about remarkable breakthroughs in processing images, video, and speech due to its efficacy in extracting highly abstract representations and learning very complex functions. However, few operating procedures have been reported on how to apply it to real use cases. In this paper, we address this problem by presenting a generalized operating procedure for DL from the perspective of unconstrained optimal design, motivated by a simple intention to remove the barrier to using DL, especially for scientists and engineers who are new to it but eager to use it. Our proposed procedure contains seven steps: project/problem statement, data collection, architecture design, initialization of parameters, definition of the loss function, computation of optimal parameters, and inference. Following this procedure, we build a multi-stream end-to-end speaker verification system, in which the input speech utterance is processed by multiple parallel streams within different frequency ranges, so that the acoustic modeling is more robust owing to the diversity of features. Trained on the VoxCeleb dataset, our experimental results verify the effectiveness of the proposed operating procedure and also show that the multi-stream framework outperforms the single-stream baseline with a 20% relative reduction in minimum decision cost function (minDCF).

* 5 pages, 4 figures, 1 table 