Scientific journal paper Q1
Detection of forged images using a combination of passive methods based on neural networks
Ancilon Leuch Alencar (Alencar, A. L.); Marcelo Dornbusch Lopes (Lopes, M. D.); Anita Maria da Rocha Fernandes (Fernandes, A. M. da R.); Julio Cesar Santos dos Anjos (Anjos, J. C. S. dos); Juan Francisco De Paz Santana (De Paz Santana, J. F.); Valderi Leithardt (Leithardt, V. R. Q.)
Journal Title
Future Internet
Year (definitive publication)
2024
Language
English
Country
Switzerland
More Information
Web of Science®: Times Cited: 0 (last checked: 2024-11-17 21:12)
Scopus: Times Cited: 1 (last checked: 2024-11-16 08:48)
Google Scholar: Times Cited: 2 (last checked: 2024-11-18 15:12)

Abstract
In the current era of social media, the proliferation of images sourced from unreliable origins underscores the pressing need for robust methods to detect forged content, particularly amidst the rapid evolution of image manipulation technologies. Existing literature delineates two primary approaches to image manipulation detection: active and passive. Active techniques intervene preemptively, embedding structures into images to facilitate subsequent authenticity verification, whereas passive methods analyze image content for traces of manipulation. This study presents a novel solution to image manipulation detection by leveraging a multi-stream neural network architecture. Our approach harnesses three convolutional neural networks (CNNs) operating on distinct data streams extracted from the original image. We have developed a solution based on two passive detection methodologies. The system utilizes two separate streams to extract specific data subsets, while a third stream processes the unaltered image. Each network independently processes its respective data stream, capturing diverse facets of the image. The outputs from these networks are then fused through concatenation to ascertain whether the image has undergone manipulation, yielding a comprehensive detection framework surpassing the efficacy of its constituent methods. Our work introduces a unique dataset derived from the fusion of four publicly available datasets, featuring organically manipulated images that closely resemble real-world scenarios. This dataset offers a more authentic representation than the algorithmically generated, patch-based datasets used by other state-of-the-art methods. By encompassing genuine manipulation scenarios, our dataset enhances the model’s ability to generalize across varied manipulation techniques, thereby improving its performance in real-world settings. After training, the merged approach obtained an accuracy of 89.59% on the validation image set, significantly higher than the model trained with only unaltered images, which obtained 78.64%, and the two other models trained on images with a feature selection method applied to enhance inconsistencies, which obtained 68.02% for Error-Level Analysis images and 50.70% for the method using the Discrete Wavelet Transform. Moreover, our proposed approach exhibits reduced accuracy variance compared to alternative models, underscoring its stability and robustness across diverse datasets. The approach outlined in this work does not provide information about the specific location or type of tampering, which limits its practical applications.
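The abstract describes the three-stream design only at a high level. Purely as an illustration, the following minimal sketch shows how such a pipeline could be assembled, assuming Pillow for the Error-Level Analysis stream, PyWavelets for the Discrete Wavelet Transform stream, and a Keras/TensorFlow three-branch CNN fused by concatenation; the input sizes, filter counts, and helper names (error_level_analysis, dwt_detail_map, cnn_branch) are illustrative assumptions, not the authors' published configuration.

import io
import numpy as np
import pywt                      # PyWavelets, assumed for the DWT stream
from PIL import Image, ImageChops, ImageEnhance
from tensorflow.keras import layers, Model

def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    # Re-save as JPEG at a fixed quality and amplify the pixel-wise residual;
    # regions recompressed differently (possible splices) stand out.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    diff = ImageChops.difference(img.convert("RGB"), Image.open(buf))
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

def dwt_detail_map(gray: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    # Single-level 2-D DWT; stack the horizontal/vertical/diagonal detail
    # sub-bands into a 3-channel map and normalize to [0, 1].
    _, (cH, cV, cD) = pywt.dwt2(gray, wavelet)
    detail = np.stack([cH, cV, cD], axis=-1).astype("float32")
    return (detail - detail.min()) / (np.ptp(detail) + 1e-8)

def cnn_branch(shape, name):
    # Small convolutional feature extractor for one input stream.
    inp = layers.Input(shape=shape, name=f"{name}_input")
    x = inp
    for filters in (32, 64, 128):                     # illustrative widths
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    return inp, layers.GlobalAveragePooling2D()(x)

# One branch per stream: unaltered RGB image, ELA map, DWT detail map.
raw_in, raw_feat = cnn_branch((256, 256, 3), "raw")
ela_in, ela_feat = cnn_branch((256, 256, 3), "ela")
dwt_in, dwt_feat = cnn_branch((128, 128, 3), "dwt")   # DWT halves each spatial dimension

# Fuse the three feature vectors by concatenation, then classify forged vs. authentic.
merged = layers.Concatenate()([raw_feat, ela_feat, dwt_feat])
hidden = layers.Dense(128, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid", name="is_forged")(hidden)

model = Model(inputs=[raw_in, ela_in, dwt_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

In this sketch the fusion happens at the pooled-feature level via concatenation; the paper's actual input resolution, layer configuration, and training regime may differ.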
Acknowledgements
--
Keywords
Digital image forensics, Convolutional neural network, Deep learning
  • Computer and Information Sciences - Natural Sciences
Funding Records
Funding Reference | Funding Entity
UIDB/00066/2020 | Fundação para a Ciência e a Tecnologia
2020/09706-7 | CAPES
UIDP/00066/2020 | Fundação para a Ciência e a Tecnologia
