Light Field View Synthesis Using Deformable Convolutional Neural Networks
Event Title
2024 Picture Coding Symposium (PCS)
Year (definitive publication)
2024
Language
English
Country
Taiwan
Abstract
Light Field (LF) imaging has emerged as a technology that simultaneously captures both the intensity and the direction of light rays from real-world scenes. Densely sampled LFs are drawing increased attention for their wide applicability in 3D reconstruction, depth estimation, and digital refocusing. Many learning-based methods have been proposed to synthesize additional views and thus obtain an LF with higher angular resolution. This paper follows an approach similar to that of Liu et al. [1], but uses deformable convolutions to improve view synthesis performance and depth-wise separable convolutions to reduce the number of model parameters. The proposed framework consists of two main modules: i) a multi-representation view synthesis module that extracts features from different LF representations of the sparse LF, and ii) a geometry-aware refinement module that synthesizes a dense LF by exploiting the structural characteristics of the corresponding sparse LF. Experimental results on various benchmarks demonstrate the superiority of the proposed method over state-of-the-art ones. The code is available at https://github.com/MSP-IUL/deformable_lfvs.
Acknowledgements
--
Keywords
light field view synthesis, deformable convolution, depth-wise separable convolution, geometry-aware network
Funding Records
| Funding Reference | Funding Entity |
|---|---|
| UIDB/50008/2020 | Fundação para a Ciência e a Tecnologia |
| PTDC/EEICOM/7096/2020 | Fundação para a Ciência e a Tecnologia |