Talk
LFVS-Mamba: State-Space Model for Light Field View Synthesis
Muhammad Zubair (Zubair, M.); Paulo Nunes (Nunes, P.); Caroline Conti (Conti, C.); Luís Ducla Soares (Soares, L. D.)
Event Title
2025 International Conference on Visual Communications and Image Processing (VCIP)
Year (definitive publication)
2025
Language
English
Country
Austria
More Information
Web of Science®

This publication is not indexed in Web of Science®

Scopus

This publication is not indexed in Scopus

Google Scholar

This publication is not indexed in Google Scholar

Overton

This publication is not indexed in Overton

Abstract
Light Field View Synthesis (LFVS) methods using Convolutional Neural Networks (CNNs) and Vision Transformers (VTs) have been extensively studied: CNNs excel at learning local spatial features via hierarchical receptive fields but cannot capture long-range global dependencies, while VTs inherently model global context through self-attention at the cost of quadratic computation and memory complexity. To address these issues, we propose LFVS-Mamba, which integrates a State-Space Module (SSM) with a Selective Scanning Mechanism to efficiently capture long-range dependencies. LFVS-Mamba processes 2D slices of the 4D LF to fully exploit spatial context, complementary angular information, and depth cues. LFVS-Mamba comprises three modules that progressively synthesize dense LFs: (i) Shallow Feature Extraction (SFE), (ii) Spatial-Angular Depth Feature Extraction (SADFE), and (iii) Angular Upsampling (AU). Experimental results on standard LF benchmarks demonstrate that LFVS-Mamba consistently outperforms existing methods.
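The abstract's key efficiency claim rests on the selective-scan SSM recurrence that Mamba-style models run over a 1D sequence (here, pixels along one 2D slice of the 4D LF). The paper's exact formulation is not given in this record, so the sketch below is only a minimal illustration of that class of recurrence: a diagonal state matrix A is discretized per step with an input-dependent step size dt, and the input/output projections B and C also vary per step, which is what makes the scan "selective". All shapes and names (selective_scan, L_, N_) are hypothetical.

```python
import numpy as np

def selective_scan(x, A, B, C, dt):
    """Run a discretized state-space recurrence over a 1D sequence.

    x : (L,)   input sequence (e.g. pixels along one 2D slice of the 4D LF)
    A : (N,)   diagonal continuous-time state matrix
    B : (L, N) per-step input projection (input-dependent -> 'selective')
    C : (L, N) per-step output projection
    dt: (L,)   per-step discretization step (input-dependent)
    """
    L, N = B.shape
    h = np.zeros(N)            # hidden state
    y = np.empty(L)            # output sequence
    for t in range(L):
        Abar = np.exp(dt[t] * A)   # zero-order-hold discretization of A
        Bbar = dt[t] * B[t]        # simple Euler discretization of B
        h = Abar * h + Bbar * x[t] # linear state update
        y[t] = C[t] @ h            # readout
    return y

# Tiny demo with hypothetical sizes: L=2 steps, N=1 state dimension.
L_, N_ = 2, 1
y = selective_scan(
    x=np.ones(L_),
    A=-np.ones(N_),
    B=np.ones((L_, N_)),
    C=np.ones((L_, N_)),
    dt=np.ones(L_),
)
```

Because the recurrence is linear in h, hardware implementations replace this Python loop with a parallel scan, which is the source of the sub-quadratic cost contrasted with self-attention in the abstract.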
Acknowledgements
--
Keywords
light field, view synthesis, angular consistency, state space model, cross-scanning
Funding Records
Funding Reference: UID/50008/2025 – Instituto de Telecomunicações
Funding Entity: FCT/MECI
Associated Records

This talk is associated with the following record: