Q2 journal article
EcDiff-LLIE: Event-conditional diffusion model for structure-preserving low-light image enhancement
Ramna Maqsood (Maqsood, R.); Paulo Nunes (Nunes, P.); Luís Ducla Soares (Soares, L. D.); Caroline Conti (Conti, C.)
Journal Title
IEEE Open Journal of Signal Processing
Year (final publication)
2026
Language
English
Country
United States of America
More Information
Web of Science®

No. of citations: 0

(Last checked: 2026-03-04 10:14)

Scopus

No. of citations: 0

(Last checked: 2026-02-23 15:09)

Google Scholar

No. of citations: 0

(Last checked: 2026-03-05 10:22)

This publication is not indexed in Overton

Abstract
Low-light image enhancement (LLIE) aims to restore the visual quality of poorly illuminated images by recovering fine details and textures while suppressing noise and artifacts. Recently, diffusion models have shown superior generative capabilities for LLIE. However, existing diffusion-based methods condition the denoising process only on low-light images or on features derived from them (e.g., structural or illumination maps). Since low-light images are severely degraded, this limits the denoising model's ability to restore fine structure and reduce artifacts. In this work, we show that event data captured simultaneously with the low-light images provide complementary high-dynamic-range and high-temporal-resolution structural information that can overcome this limitation. We therefore propose EcDiff-LLIE, a novel event-conditional diffusion framework for LLIE. At its core, we introduce a multimodality denoising network conditioned on both low-light images and concurrent event streams. To effectively fuse the two modalities, we design a cross-modality attention block that bridges their domain differences while also enabling long-range dependency modeling for improved structure preservation. Experiments on the synthetic SDSD and real-world SDE datasets show significant improvements in quantitative evaluation metrics, and evaluation on the high-resolution real-world HUE dataset further demonstrates the generalization ability of the proposed framework.
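The cross-modality attention mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `cross_modality_attention` and the toy token shapes are hypothetical, and real event streams would first be voxelized and embedded. The sketch only shows the general idea of image-derived queries attending over event-derived keys and values.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modality_attention(img_feats, evt_feats):
    """Single-head cross-attention: image tokens (queries) attend to
    event tokens (keys/values), fusing event structure into the image branch."""
    d_k = img_feats.shape[-1]
    scores = img_feats @ evt_feats.T / np.sqrt(d_k)  # (N_img, N_evt)
    attn = softmax(scores, axis=-1)                  # rows sum to 1
    return attn @ evt_feats                          # (N_img, d) fused features

# Toy example: 4 image tokens and 6 event tokens, each 8-dimensional.
rng = np.random.default_rng(0)
img = rng.standard_normal((4, 8))
evt = rng.standard_normal((6, 8))
out = cross_modality_attention(img, evt)
print(out.shape)  # (4, 8)
```

In a full denoising network, learned query/key/value projections and a residual connection back into the image branch would surround this core operation.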
Acknowledgements
This work was supported in part by national funds through FCT – Fundação para a Ciência e a Tecnologia, I.P., and in part by EU funds through project/support UID/50008/2025 – Instituto de Telecomunicações, with DOI identifier 10.54499/UID/50008/2025.
Keywords
Low-light image enhancement, Event camera, Diffusion model, Cross-modality self-attention
  • Computer and Information Sciences - Natural Sciences
  • Electrical, Electronic and Computer Engineering - Engineering and Technology
Funding records
Funding reference: UID/50008/2025
Funding entity: Fundação para a Ciência e a Tecnologia