Detailed Publication Description
Detection of forged images using a combination of passive methods based on neural networks
Journal Title
Future Internet
Year (final publication)
2024
Language
English
Country
Switzerland
More Information
Web of Science®
Scopus
Google Scholar
Abstract
In the current era of social media, the proliferation of images from unreliable sources underscores the pressing need for robust methods to detect forged content, particularly amid the rapid evolution of image manipulation technologies. Existing literature delineates two primary approaches to image manipulation detection: active and passive. Active techniques intervene preemptively, embedding structures into images to facilitate subsequent authenticity verification, whereas passive methods analyze image content for traces of manipulation. This study presents a novel solution to image manipulation detection by leveraging a multi-stream neural network architecture. Our approach harnesses three convolutional neural networks (CNNs) operating on distinct data streams extracted from the original image. We have developed a solution based on two passive detection methodologies: two separate streams extract specific data subsets, while a third stream processes the unaltered image. Each network independently processes its respective data stream, capturing different facets of the image. The outputs of these networks are then fused through concatenation to determine whether the image has been manipulated, yielding a comprehensive detection framework that surpasses the efficacy of its constituent methods. Our work introduces a unique dataset derived from the fusion of four publicly available datasets, featuring organically manipulated images that closely resemble real-world scenarios. This dataset offers a more authentic representation than the algorithmically generated, patch-based datasets used by other state-of-the-art methods. By encompassing genuine manipulation scenarios, our dataset enhances the model's ability to generalize across varied manipulation techniques, thereby improving its performance in real-world settings. After training, the merged approach obtained an accuracy of 89.59% on the validation image set, significantly higher than the model trained only on unaltered images, which obtained 78.64%, and the two other models trained on images preprocessed with a feature selection method applied to enhance inconsistencies, which obtained 68.02% for Error-Level Analysis images and 50.70% for the method using the Discrete Wavelet Transform. Moreover, our proposed approach exhibits lower accuracy variance than the alternative models, underscoring its stability and robustness across diverse datasets. The approach outlined in this work does not provide information about the specific location or type of tampering, which limits its practical applications.
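The abstract describes the architecture only at a high level: three CNN streams (the raw image, an Error-Level Analysis image, and a Discrete Wavelet Transform representation) fused by concatenation for a binary manipulated/authentic decision. The following PyTorch sketch illustrates that general idea. The branch depth, feature dimension, JPEG quality factor used for ELA, and the packing of DWT sub-bands into channels are assumptions for illustration, not the authors' implementation.

```python
import io

import torch
import torch.nn as nn
from PIL import Image, ImageChops


def ela_image(path: str, quality: int = 90) -> Image.Image:
    """Error-Level Analysis: re-save as JPEG and amplify the compression residual.
    The quality factor 90 is an assumption, not taken from the paper."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    # Stretch the residual to the full 0-255 range so inconsistencies stand out.
    return diff.point(lambda px: min(255, px * 255 // max_diff))


class Branch(nn.Module):
    """A small CNN stream; the paper's actual backbones are not specified here."""

    def __init__(self, in_channels: int = 3, feat_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))


class MultiStreamDetector(nn.Module):
    """Three streams fused by concatenation, as the abstract describes."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.rgb_branch = Branch(3, feat_dim)   # unaltered image
        self.ela_branch = Branch(3, feat_dim)   # Error-Level Analysis image
        self.dwt_branch = Branch(3, feat_dim)   # DWT sub-bands stacked as channels (assumption)
        self.classifier = nn.Sequential(
            nn.Linear(3 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: manipulated vs. authentic
        )

    def forward(self, rgb: torch.Tensor, ela: torch.Tensor, dwt: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.rgb_branch(rgb), self.ela_branch(ela), self.dwt_branch(dwt)], dim=1
        )
        return self.classifier(fused)
```

With real tensors, `MultiStreamDetector()(rgb, ela, dwt)` returns one logit per image, trainable with `nn.BCEWithLogitsLoss`. Late fusion by concatenation lets each stream keep its own feature extractor while the final classifier learns how to weigh their evidence, which is the property the abstract credits for outperforming the individual single-stream models.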
Acknowledgements
--
Keywords
Digital image forensics, Convolutional neural network, Deep learning
Fields of Science and Technology Classification
- Computer and Information Sciences - Natural Sciences
Funding records
| Funding reference | Funding entity |
|---|---|
| UIDB/00066/2020 | Fundação para a Ciência e a Tecnologia |
| 2020/09706-7 | CAPES |
| UIDP/00066/2020 | Fundação para a Ciência e a Tecnologia |