Scientific journal paper Q2
EcDiff-LLIE: Event-conditional diffusion model for structure-preserving low-light image enhancement
Ramna Maqsood (Maqsood, R.); Paulo Nunes (Nunes, P.); Luís Ducla Soares (Soares, L. D.); Caroline Conti (Conti, C.)
Journal Title
IEEE Open Journal of Signal Processing
Year (definitive publication)
2026
Language
English
Country
United States of America
More Information
Web of Science®: Times Cited: 0 (last checked: 2026-03-04 10:14)

Scopus: Times Cited: 0 (last checked: 2026-02-23 15:09)

Google Scholar: Times Cited: 0 (last checked: 2026-03-05 10:22)

This publication is not indexed in Overton

Abstract
Low-light image enhancement (LLIE) aims to restore the visual quality of poorly illuminated images by recovering fine details and textures while suppressing noise and artifacts. Recently, diffusion models have shown superior generative capabilities for LLIE. However, existing diffusion-based methods condition the denoising process only on low-light images or features derived from them (e.g., structural or illumination maps). Since the low-light images are severely degraded, this limits the denoising model’s ability to restore fine structure and reduce artifacts. In this work, we show that event data captured simultaneously with the low-light images provides complementary high-dynamic-range and high-temporal-resolution structural information that can overcome this limitation. Therefore, we propose EcDiff-LLIE, a novel event-conditional diffusion framework for LLIE. At its core, we introduce a multimodality denoising network that conditions on both low-light images and concurrent event streams. To effectively fuse the two modalities, we design a cross-modality attention block that bridges their domain differences while also enabling long-range dependency modeling for improved structural preservation. Experiments on the synthetic SDSD and real-world SDE datasets show significant improvements in quantitative evaluation metrics. Moreover, evaluation on the high-resolution real-world HUE dataset demonstrates the generalization ability of the proposed framework.
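The cross-modality attention block is not specified in this record; as a purely illustrative sketch, image tokens (queries) attending to concurrent event tokens (keys/values) with a residual fusion might look as follows. All names, shapes, and the random projection weights are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modality_attention(img_feat, evt_feat, d_k=32, seed=0):
    """Hypothetical cross-modality attention: image features query event features.

    img_feat: (N_img, C) low-light image tokens
    evt_feat: (N_evt, C) event-stream tokens
    Projection weights are random placeholders; a real model would learn them.
    """
    rng = np.random.default_rng(seed)
    C = img_feat.shape[1]
    Wq = rng.standard_normal((C, d_k)) / np.sqrt(C)   # query projection
    Wk = rng.standard_normal((C, d_k)) / np.sqrt(C)   # key projection
    Wv = rng.standard_normal((C, C)) / np.sqrt(C)     # value projection

    Q = img_feat @ Wq                        # (N_img, d_k)
    K = evt_feat @ Wk                        # (N_evt, d_k)
    V = evt_feat @ Wv                        # (N_evt, C)
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (N_img, N_evt) attention map
    return img_feat + attn @ V               # residual fusion, (N_img, C)

out = cross_modality_attention(np.ones((64, 16)), np.ones((128, 16)))
print(out.shape)  # (64, 16)
```

Because every image token can attend to every event token, this kind of block models long-range dependencies across the two modalities, which is the role the abstract attributes to the proposed design.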
Acknowledgements
This work was supported in part by national funds through FCT – Fundação para a Ciência e a Tecnologia, I.P., and in part by EU funds through Project/support UID/50008/2025 – Instituto de Telecomunicações, with DOI identifier 10.54499/UID/50008/2025.
Keywords
Low-light image enhancement, Event camera, Diffusion model, Cross-modality self-attention
  • Computer and Information Sciences - Natural Sciences
  • Electrical Engineering, Electronic Engineering, Information Engineering - Engineering and Technology
Funding Records
Funding Reference: UID/50008/2025
Funding Entity: Fundação para a Ciência e a Tecnologia