Videos are a powerful medium of communication adopted in several contexts and used for both benign and malicious purposes (e.g., education vs. reputation damage). Nowadays, realistic video manipulation techniques such as deepfake generators constitute a severe threat to our society in terms of misinformation. While most current research treats deepfake detection as a binary task, the identification of a real video among a pool of deepfakes sharing the same origin has received little attention. Although this pool task may be rarer in real life than the binary one, the insights derived from such analyses can deepen our understanding of deepfake behavior, benefiting binary deepfake detection as well. In this paper, we address this less-investigated scenario by studying the role of Photo Response Non-Uniformity (PRNU) in deepfake detection. Our analysis, in agreement with prior studies, shows that PRNU can be a valuable signal for identifying deepfake videos. In particular, we found that distinctive PRNU characteristics separate real videos from their deepfake versions: the autocorrelation of a real video's PRNU tends to be lower than that of its deepfake counterparts. Motivated by this, we propose PRaNA, a training-free strategy that leverages PRNU autocorrelation. Our results on three well-known datasets confirm the algorithm's robustness and transferability, with accuracy up to 66% when identifying one real video in a pool of four deepfakes generated from it as the source, and up to 80% when only one deepfake is considered. Our work aims to open new strategies to counter deepfake diffusion.
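To make the core idea concrete, the following is a minimal Python sketch of a training-free pool decision based on PRNU autocorrelation. It assumes grayscale float frames, uses a Gaussian blur as a stand-in denoiser for the wavelet-based filters typically used in PRNU extraction, and summarizes autocorrelation as the mean absolute off-peak value; the function names and this scalar statistic are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prnu_residual(frames, sigma=1.0):
    """Estimate a PRNU-like noise residual by averaging
    (frame - denoised frame) over all frames. A Gaussian blur
    stands in here for the wavelet denoisers common in PRNU work."""
    residuals = [f - gaussian_filter(f, sigma) for f in frames]
    return np.mean(residuals, axis=0)

def autocorr_score(residual):
    """Scalar autocorrelation statistic: mean absolute value of the
    normalized circular autocorrelation, excluding the zero-lag peak.
    (Hypothetical summary; PRaNA's exact statistic may differ.)"""
    r = residual - residual.mean()
    f = np.fft.fft2(r)
    ac = np.fft.ifft2(f * np.conj(f)).real
    ac /= ac.flat[0]   # normalize by zero-lag energy
    ac.flat[0] = 0.0   # drop the trivial self-correlation peak
    return np.abs(ac).mean()

def pick_real(video_frame_sets):
    """Given a pool of videos (each a list of grayscale float frames),
    return the index of the candidate real video: the one whose PRNU
    residual has the lowest autocorrelation score."""
    scores = [autocorr_score(prnu_residual(fs)) for fs in video_frame_sets]
    return int(np.argmin(scores))
```

Under the paper's observation that real videos exhibit lower PRNU autocorrelation than their deepfake versions, `argmin` over the pool selects the likely real video; no training data or model fitting is involved.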