Ciência_Iscte
Light Field View Synthesis Using Deformable Convolutional Neural Networks
Event Title
2024 Picture Coding Symposium (PCS)
Year (definitive publication)
2024
Language
English
Country
Taiwan
More Information
Web of Science®
This publication is not indexed in the Web of Science®
Scopus
This publication is not indexed in Scopus
Google Scholar
This publication is not indexed in Google Scholar
Abstract
Light Field (LF) imaging has emerged as a technology that can simultaneously capture both the intensity values and the directions of light rays from real-world scenes. Densely sampled LFs are drawing increased attention for their wide application in 3D reconstruction, depth estimation, and digital refocusing. Many learning-based methods have been proposed to synthesize additional views and thus obtain an LF with higher angular resolution. This paper follows an approach similar to that of Liu et al. [1], but uses deformable convolutions to improve view synthesis performance and depth-wise separable convolutions to reduce the number of model parameters. The proposed framework consists of two main modules: i) a multi-representation view synthesis module that extracts features from different LF representations of the sparse LF, and ii) a geometry-aware refinement module that synthesizes a dense LF by exploiting the structural characteristics of the corresponding sparse LF. Experimental results over various benchmarks demonstrate the superiority of the proposed method when compared to state-of-the-art ones. The code is available at https://github.com/MSP-IUL/deformable_lfvs.
Acknowledgements
--
Keywords
light field view synthesis, deformable convolution, depth-wise separable convolution, geometry-aware network
Funding records
| Funding Reference | Funding Entity |
|---|---|
| UIDB/50008/2020 | Fundação para a Ciência e a Tecnologia |
| PTDC/EEICOM/7096/2020 | Fundação para a Ciência e a Tecnologia |