Calibration Evaluation of Deepfake Detection Models

Reference

Zhang Pengpeng, Song Zongze, Peng Bo, et al. Calibration Evaluation of Models Oriented to Deepfake Detection[J]. Journal of Cybersecurity, 2023, 1(3): 97-106.

Background

With the development of deepfake technology, its positive role in entertainment and cultural exchange coexists with the potential threat of cyber attacks. Deepfake content on social networks currently centers on "face swapping," and the generated fake face images have become difficult to distinguish from real ones. Although existing deep learning models achieve high predictive accuracy in deepfake detection tasks, the reliability of their predictive confidence, i.e., the model's calibration, still needs to be verified and improved. This paper examines the calibration of the Weakly Supervised Data Augmentation Network (WS-DAN)[1] under different test data conditions and compares the effects of two calibration enhancement methods: Monte Carlo Dropout[2] and Deep Ensembles[3].


Figure 1 Model Detection Flowchart

Innovations

The core innovation of this paper is the application of two classic calibration enhancement methods, Monte Carlo Dropout and Deep Ensembles, to the task of deepfake detection. Since existing detection methods do not evaluate the calibration of model output confidence, this paper uses these two methods to calibrate network outputs, making the confidence of model predictions more reliable.


Experiments

To preserve the integrity of the attention mechanism in WS-DAN, this paper applies the Dropout layer before the last fully connected layer in the Monte Carlo Dropout method, with the Dropout probability set to 0.5 during both training and testing. To simplify the problem, the Deep Ensembles method used here integrates multiple models of the same structure, without adding adversarial examples, to obtain a distribution of detection results.
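The key idea of Monte Carlo Dropout is to keep dropout active at test time and average the outputs of several stochastic forward passes. A minimal NumPy sketch on a toy linear classification head is shown below; the weights `W`, the input `x`, and the single-layer setup are illustrative assumptions, not the WS-DAN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy head: a single linear layer + softmax stands in for the
# network's last fully connected layer. W and x are illustrative only.
W = rng.normal(size=(2, 8))   # logits for 2 classes from 8 features
x = rng.normal(size=8)        # feature vector of one test sample

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mc_dropout_predict(x, W, p=0.5, T=100, rng=rng):
    """Keep dropout active at test time: average the softmax outputs of
    T stochastic forward passes; the spread reflects model uncertainty."""
    probs = []
    for _ in range(T):
        mask = rng.random(x.shape) >= p      # drop each feature with prob p
        x_drop = x * mask / (1.0 - p)        # inverted-dropout scaling
        probs.append(softmax(W @ x_drop))
    probs = np.stack(probs)
    return probs.mean(axis=0), probs.std(axis=0)

mean_p, std_p = mc_dropout_predict(x, W)
```

The predictive mean `mean_p` is the calibrated confidence; a large per-class standard deviation `std_p` signals an unreliable prediction.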

The experiments were conducted on three datasets: DFDC[4], Celeb-DF[5], and FaceForensics++[6]. Average precision (AP) was used to evaluate predictive accuracy, while log loss and expected calibration error (ECE) were used to assess the calibration of the model's predictions. The results show that WS-DAN combined with EfficientNet-b3[7] or Xception[8] improves in both accuracy and calibration. In particular, the Deep Ensembles method steadily improves the model's calibration as the number of ensembled networks increases.
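For reference, ECE bins the test samples by predicted confidence and averages the gap between each bin's accuracy and its mean confidence, weighted by bin size. A small self-contained sketch (assuming 10 equal-width bins, which is a common choice rather than a detail stated in the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE over equal-width confidence bins: sum over bins of
    (bin size / N) * |bin accuracy - bin mean confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()    # fraction correct in this bin
            conf = confidences[mask].mean()
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

A perfectly calibrated model (e.g., 90% accuracy among samples predicted with 0.9 confidence) yields an ECE of 0; an overconfident model that is always wrong at confidence 1.0 yields an ECE of 1.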


Table 1 Comparison of Different Methods on DFDC Dataset


Table 2 Comparison of Different Methods on Celeb-DF Dataset


Table 3 Comparison of Different Methods on FF++(HQ) Deepfakes Dataset


Table 4 Comparison of Different Methods on FF++(HQ) Face2Face Dataset


Table 5 Comparison of Different Methods on FF++(HQ) FaceSwap Dataset


Table 6 Comparison of Different Methods on FF++(HQ) Neural Textures Dataset


To further investigate the improvement in prediction performance from the Deep Ensembles method, experiments were conducted using the WS-DAN model with the Xception network as the feature extractor, and the Deep Ensembles method was evaluated on different datasets. The confidence of each test sample was computed, and the classification accuracy and the proportion of samples above a given confidence threshold were tallied. The results are shown in Figure 2(a) and (b): on all datasets, models using this method classify more accurately as the confidence threshold increases, suggesting that in practical applications the computed confidence can be used to decide whether a prediction should be trusted.
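The per-threshold statistic described above (accuracy among retained samples, plus the retained proportion) can be sketched as follows; the function name and the example thresholds are illustrative, not taken from the paper.

```python
import numpy as np

def selective_stats(confidences, correct, thresholds):
    """For each threshold t, report (t, coverage, selective accuracy):
    coverage   = proportion of samples with confidence >= t,
    accuracy   = fraction correct among those retained samples."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    rows = []
    for t in thresholds:
        kept = confidences >= t
        coverage = kept.mean()
        acc = correct[kept].mean() if kept.any() else float("nan")
        rows.append((t, coverage, acc))
    return rows
```

Sweeping the threshold trades coverage for accuracy: raising it retains fewer samples, but the retained predictions are more often correct, which is the trend Figure 2 reports.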


Figure 2 Relationship Between Model Accuracy, Sample Proportion, and Confidence Under Deep Ensembles Method Across Different Datasets

Conclusion

This paper first verifies the effectiveness of the weakly supervised data augmentation module for detection: the Attention Cropping and Attention Dropping modules let the model focus on the detailed information in input images, significantly enhancing both the accuracy and the calibration of the detection results. It then compares how the Monte Carlo Dropout and Deep Ensembles methods improve model calibration. Experimental results show that the Deep Ensembles method strengthens model performance as the number of ensembled networks increases and effectively reduces high-confidence but incorrect predictions. Moreover, models using this method appropriately lower their prediction confidence on samples from unknown distributions, judging such samples cautiously and thereby effectively improving calibration.



Source: Journal of Cybersecurity, Issue 3

Journal of Cybersecurity supervised by China Aerospace Science and Technology Corporation, hosted by the China Academy of Space Technology, bimonthly publication, publicly distributed domestically and internationally (CN 10-1901/TP, ISSN 2097-3136). The purpose of the journal is to “build an academic research exchange platform in the field of cyberspace security, disseminate academic thoughts and theories, showcase scientific research, innovative technologies, and application results, support the construction of cyberspace security disciplines, and provide solid support and services for building a strong cyber nation.”


Website: www.journalofcybersec.com

Phone: 010-89061756/ 89061778

Email: [email protected]
