Forensic expertise area: Forensic Data Science, Digital Forensics

Short description

With generative AI models becoming better at generating high-fidelity media, reliable and robust detection of deepfakes is becoming more important. Among the many strategies explored in the literature, remote photoplethysmography (rPPG)-based detection methods represent a promising direction. These methods estimate the blood volume pulse signal, and from it the heart rate, of the person in a video and use this as a feature for deepfake detection, based on the idea that deepfake generators cannot accurately recreate a natural heart rate signal. Studies have shown that this approach can detect deepfakes with high accuracy [1, 2, 3], but research by previous interns has also shown that the accuracy of the estimated heart rate depends strongly on various factors, including environmental conditions, camera specifications and natural variability between participants. As a result, it is not yet entirely clear when this method is applicable in practice. In addition, we would like to know to what extent the original heart rate signal is still present in manipulated deepfake videos, such as face swaps.
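To illustrate the basic principle, the sketch below shows a minimal, classic green-channel rPPG pipeline in Python: average the green intensity over a face region per frame, band-pass filter to the plausible pulse band, and take the dominant spectral peak as the heart rate. This is a simplified illustration, not one of the specific extraction algorithms to be compared in this project; the function name and interface are our own.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(roi_means, fps):
    """Estimate heart rate (BPM) from per-frame green-channel means of a face ROI.

    roi_means: 1-D array, mean green intensity of the face region in each frame.
    fps: video frame rate in Hz.
    """
    x = np.asarray(roi_means, dtype=float)
    x = x - x.mean()                              # remove the DC component
    # Band-pass to the physiologically plausible pulse band: 0.7-4.0 Hz (42-240 BPM)
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    x = filtfilt(b, a, x)
    # Dominant frequency within the pulse band via the FFT power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(power[band])]
    return 60.0 * peak_hz                         # Hz -> beats per minute
```

In practice, published methods (e.g. CHROM, POS, or learning-based extractors) combine colour channels and handle motion far more robustly than this single-channel version, which is exactly where the condition-dependent accuracy differences studied in this project arise.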

The goal of this project will therefore be to investigate the influence of various factors such as environmental lighting, movement, type of camera and compression level on the accuracy of the rPPG estimates, to get an idea of the applicability of the method in various conditions that might be relevant in case material. To this end you will:

  • Create a dataset of ground-truth heart rate measurements of multiple participants in various conditions.
  • Compare different rPPG extraction algorithms using this dataset to evaluate which one provides the most accurate estimations in which conditions.
  • Investigate to what extent estimated heart rate signals in face swap deepfake videos still correspond to the original ground-truth heart rate measurements.
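For the comparison and correspondence steps above, agreement between estimated and ground-truth heart rates could be scored along the following lines. This is a hedged sketch with illustrative metric choices of our own (the 5 BPM tolerance is a common convention in the rPPG literature, not a project requirement); inputs are assumed to be per-clip BPM estimates paired with ground-truth measurements.

```python
import numpy as np

def score_rppg_method(est_bpm, true_bpm):
    """Agreement metrics between estimated and ground-truth heart rates (BPM)."""
    est = np.asarray(est_bpm, dtype=float)
    true = np.asarray(true_bpm, dtype=float)
    err = est - true
    return {
        "mae": float(np.mean(np.abs(err))),            # mean absolute error
        "rmse": float(np.sqrt(np.mean(err ** 2))),     # root-mean-square error
        "pearson_r": float(np.corrcoef(est, true)[0, 1]),
        # fraction of clips within 5 BPM of ground truth
        "acc_5bpm": float(np.mean(np.abs(err) <= 5.0)),
    }
```

Computing these metrics per condition (lighting, motion, camera, compression level) and per algorithm would directly support the comparison in the second bullet, and applying them to face-swap clips addresses the third.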

References

1) Ciftci, U. A., Demir, I., & Yin, L. (2020). FakeCatcher: Detection of synthetic portrait videos using biological signals. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP, 1–1. doi:10.1109/TPAMI.2020.3009287

2) Wu, J., Zhu, Y., Jiang, X., Liu, Y., & Lin, J. (2023). Local attention and long-distance interaction of rPPG for deepfake detection. The Visual Computer, 1–12. doi:10.1007/s00371-023-02833-x

3) An, B. S., Lim, H., Seong, H. A., & Lee, E. C. (2024). Facial and neck region analysis for deepfake detection using remote photoplethysmography signal similarity. IET Biometrics, 2024(1). doi:10.1049/bme2/7095412

Required/Recommended skills and expertise

- Interest in / affinity with digital (video) forensics
- Python programming skills

Information

Institute / Company: Nederlands Forensisch Instituut (NFI)
Supervisor: Stijn van Lierop
UvA Examiner: Zeno Geradts
UvA Coordinators: Arian van Asten & Yorike Hartman

Date of publication: September 16, 2025