The rise of AI-generated synthetic media, or deepfakes, has introduced unprecedented opportunities and challenges across various fields, including entertainment, cybersecurity, and digital communication. Built on advanced frameworks such as Generative Adversarial Networks (GANs) and Diffusion Models (DMs), deepfakes can produce highly realistic yet fabricated content. While these advancements enable creative and innovative applications, they also pose severe ethical, social, and security risks due to their potential for misuse. The proliferation of deepfakes has triggered phenomena such as “Impostor Bias”, a growing skepticism toward the authenticity of multimedia content that further complicates trust in digital interactions. This paper describes the research project FF4ALL (Detection of Deep Fake Media and Life-Long Media Authentication), which addresses the detection and authentication of deepfakes, focusing on forensic attribution, passive and active authentication, and detection in real-world scenarios. By examining both the strengths and limitations of current methodologies, we highlight critical research gaps and propose directions for future advancements to ensure media integrity and trustworthiness in an era increasingly dominated by synthetic media.
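As context for the passive-authentication techniques the abstract mentions, the sketch below is a toy illustration, not the FF4ALL method or any technique from the paper: one classic forensic cue is that heavily processed or synthesized images carry atypical high-frequency residuals, which can be exposed by subtracting a locally smoothed version of the image and measuring the residual's energy. All function names and the 3x3 filter choice are illustrative assumptions.

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter built from padded shifts (edge-replicated borders)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            acc += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / 9.0

def high_freq_energy(img):
    """Mean squared high-frequency residual: the image minus its local mean."""
    residual = img.astype(float) - mean_filter3(img)
    return float(np.mean(residual ** 2))

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 255, 64), (64, 1))   # gradient: almost no high-freq content
noisy = smooth + rng.normal(0, 25, smooth.shape)     # added noise: strong residual
print(high_freq_energy(smooth) < high_freq_energy(noisy))
```

Real passive detectors surveyed in the literature use far richer statistics (sensor noise fingerprints, learned CNN features), but they share this structure: extract a residual signal, then score how far its statistics deviate from those of genuine captures.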

Deepfake Media Forensics: Status and Future Challenges / Amerini, I.; Barni, M.; Battiato, S.; Bestagini, P.; Boato, G.; Bruni, V.; Caldelli, R.; De Natale, F.; De Nicola, R.; Guarnera, L.; Mandelli, S.; Majid, T.; Marcialis, G. L.; Micheletto, M.; Montibeller, A.; Orru, G.; Ortis, A.; Perazzo, P.; Puglisi, G.; Purnekar, N.; Salvi, D.; Tubaro, S.; Villari, M.; Vitulano, D. - In: JOURNAL OF IMAGING. - ISSN 2313-433X. - 11:3(2025). [10.3390/jimaging11030073]

Deepfake Media Forensics: Status and Future Challenges

Battiato, S.; Boato, G.; De Natale, F.; Montibeller, A.
2025-01-01

Files in this record:
  • jimaging-11-00073 (2).pdf (open access). Type: publisher's version (publisher's layout). License: Creative Commons. Size: 10.35 MB. Format: Adobe PDF.
  • jimaging-11-00073+(2)_compressed.pdf (open access). Description: compressed publisher's version. Type: publisher's version (publisher's layout). License: Creative Commons. Size: 975.75 kB. Format: Adobe PDF.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/453850
Citations
  • PMC: not available
  • Scopus: 17
  • Web of Science: 13
  • OpenAlex: not available