Reported by New Intelligence
Source: IEEE Spectrum
Editor: Jin Lei
【New Intelligence Overview】DeepFake has recently become a hot topic, raising widespread concern. In response to its potential harms, researchers have developed a neural-network-based tool that can judge whether a DeepFake image is authentic.
The nemesis of DeepFake is here!
Since the advent of DeepFake, its ability to create fake images and videos has reached astonishing levels, leading people to exclaim: “We can no longer trust our own eyes.” The moral and legal implications arising from this are evident.
To tackle this phenomenon, researchers from the University of California, Riverside, have recently proposed a neural network-based tool that can determine the authenticity of photos in no time!
Paper link:
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8626149
Initial tests of the algorithm show an accuracy rate between 71% and 95%, depending on the sample dataset, which ranged from unmodified images to forgeries altered at the single-pixel level.
The algorithm has not yet been extended to include detection of deepfake videos.
Professor Amit Roy-Chowdhury from the University of California, Riverside, states:
“DeepFake refers to modified images or videos that can insert or delete parts of the content, altering the original meaning of the image.”
Identifying DeepFakes is a research challenge because the images are manipulated in ways that are indistinguishable to the human eye.
Whether in humanitarian work, product launches, or election campaigns, DeepFake images and videos can lead to serious misinterpretations of events.
Imagine if DeepFake technology were maliciously used widely; we might frequently see a political candidate accused of violent crimes or a CEO admitting to safety issues with their company’s products, disrupting societal stability.
Chowdhury, one of the five authors of this research, states:
This detection algorithm could be a powerful tool against new threats in the social media era. However, we must be cautious not to rely too heavily on these algorithms. Overly trusted detection algorithms could be weaponized by those trying to spread false information.
I believe we must handle anything related to AI and machine learning with care, as we need to understand that the results provided by these systems are probabilistic. And these probabilities are often far below 0.98 or 0.99. We should not accept them blindly.
In this sense, DeepFake represents a new domain in cybersecurity, which is an ongoing arms race where both “good guys” and “bad guys” continue to evolve.
In this study, the team combines existing concepts from current literature in a novel and powerful way.
One component of this algorithm is a recurrent neural network, which divides the images in question into small patches and then examines these patches pixel by pixel.
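As a rough illustration of the patch-splitting step (the paper's actual patch size and traversal order are not given in this article; the 8x8 tiling below is an assumption for illustration):

```python
import numpy as np

def split_into_patches(image, patch=8):
    # Split an H x W image into non-overlapping patch x patch tiles.
    # The 8x8 size is an assumption for illustration only.
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(image[y:y + patch, x:x + patch])
    return tiles

img = np.zeros((32, 32), dtype=np.uint8)
tiles = split_into_patches(img)
print(len(tiles))  # 16 non-overlapping 8x8 tiles from a 32x32 image
```

Each tile would then be fed to the recurrent network for pixel-level inspection.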
Algorithm structure framework
The network was trained on thousands of DeepFake and real images, learning to highlight traces of forgery at the pixel level.
Roy-Chowdhury mentions that the boundaries around the altered portions of images often contain traces of manipulation. When an object is inserted into an image, the boundary regions typically exhibit certain characteristics.
Those who intentionally alter images often over-smooth the edges of the inserted object to hide the splice, but even that smoothing leaves characteristic traces.
The other part of the algorithm passes the entire image through a series of encoding filters. Mathematically, these filters let the algorithm consider the image at a larger, more holistic scale.
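The paper's encoding filters are learned; as a crude stand-in, simple average pooling shows how a coarser, whole-image view is obtained:

```python
import numpy as np

def coarse_view(image, factor=2):
    # Average-pool the image into factor x factor blocks, a crude
    # stand-in for the paper's learned encoding filters.
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(coarse_view(img))  # [[ 2.5  4.5] [10.5 12.5]]
```

Each output value summarizes a whole block of pixels, which is why this branch can spot anomalies that only show up at a larger scale.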
The algorithm then compares the pixel-by-pixel output with the analysis from the higher-level encoding filters. When both parallel analyses raise red flags over the same region of the image, that region is flagged as "possibly DeepFake."
Suppose, for example, that an image of a bird is pasted onto a photo that originally showed only branches. The pixel-by-pixel branch might flag the pixels around the bird's claws as "problematic," while the encoder branch might detect boundary anomalies at a larger scale.
As long as both neural networks mark the same area around the bird in the image, Roy-Chowdhury’s team’s algorithm will classify the image of the bird and branches as “possibly DeepFake.”
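The agreement step can be sketched as intersecting two suspicion maps. The 0.5 threshold and the toy scores below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy per-pixel suspicion scores in [0, 1] from the two branches;
# in the real system these would come from the trained networks.
pixel_scores = np.array([[0.9, 0.2],
                         [0.8, 0.1]])
encoder_scores = np.array([[0.7, 0.6],
                           [0.3, 0.2]])

threshold = 0.5  # assumed cutoff for illustration
flagged = (pixel_scores > threshold) & (encoder_scores > threshold)
print(int(flagged.sum()))  # 1: only the top-left pixel is flagged by both
```

Requiring both branches to agree is what keeps a false alarm from either branch alone from marking an image as fake.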
With the ability to identify the authenticity of DeepFake images, the next step is to tackle videos.
Roy-Chowdhury indicates that the algorithm now needs to be expanded and applied to videos. This algorithm may need to consider how images change frame by frame and whether any detectable patterns can be recognized from these changes.
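The video extension remains future work, so any concrete approach is speculative. A naive starting point for "how images change frame by frame" is simple frame differencing; the threshold below is an arbitrary assumption:

```python
import numpy as np

def frame_change_map(prev_frame, frame, thresh=10):
    # Flag pixels whose intensity jumps sharply between consecutive
    # frames; a naive proxy for detecting frame-to-frame tampering.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > thresh

f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy()
f1[1, 1] = 200  # simulate a spliced pixel appearing in the next frame
print(int(frame_change_map(f0, f1).sum()))  # 1 changed pixel detected
```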
Given the urgency of DeepFake detection and the growing number of bad actors worldwide seeking to exploit such misinformation, Roy-Chowdhury calls on other researchers to apply the algorithm in more realistic settings.
Blog:
https://spectrum.ieee.org/tech-talk/computing/software/a-twotrack-algorithm-to-detect-deepfake-images