Export Publication

The publication can be exported in the following formats: APA (American Psychological Association), IEEE (Institute of Electrical and Electronics Engineers), BibTeX, and RIS.

Export Reference (APA)
Hamad, M., Conti, C., Nunes, P., & Soares, L. D. (2022). View-consistent 4D Light Field style transfer using neural networks and over-segmentation. In 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP). Nafplio: IEEE.
Export Reference (IEEE)
M. F. Hamad et al., "View-consistent 4D Light Field style transfer using neural networks and over-segmentation," in 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Nafplio, Greece, 2022, doi: 10.1109/IVMSP54334.2022.9816312.
Export BibTeX
@inproceedings{hamad2022ivmsp,
	author = "Hamad, M. and Conti, C. and Nunes, P. and Soares, L. D.",
	title = "View-consistent 4D Light Field style transfer using neural networks and over-segmentation",
	booktitle = "2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP)",
	year = "2022",
	doi = "10.1109/IVMSP54334.2022.9816312",
	publisher = "IEEE",
	address = "Nafplio",
	url = "https://ieeexplore.ieee.org/xpl/conhome/9816065/proceeding"
}
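Since the BibTeX entry above stores each field as a double-quoted value on its own line, it can be read programmatically. As a rough illustration (the regex and the function name `parse_bibtex_fields` are hypothetical, and the sketch assumes simple one-line, double-quoted fields with no nested braces), the fields could be pulled into a Python dict like this:

```python
import re

# Matches lines of the form:  key = "value",  (trailing comma optional).
# This is a simplification: real BibTeX also allows braced values,
# multi-line values, and unquoted numbers, which are not handled here.
FIELD = re.compile(r'^\s*(\w+)\s*=\s*"(.*)"\s*,?\s*$')

def parse_bibtex_fields(entry: str) -> dict:
    """Extract key/value fields from a simple, quoted BibTeX entry."""
    fields = {}
    for line in entry.splitlines():
        m = FIELD.match(line)
        if m:
            fields[m.group(1)] = m.group(2)
    return fields
```

For the entry above, this would yield keys such as `author`, `title`, `year`, and `doi`; for anything beyond this simple shape, a dedicated BibTeX parser would be the safer choice.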
Export RIS
TY  - CPAPER
TI  - View-consistent 4D Light Field style transfer using neural networks and over-segmentation
T2  - 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP)
AU  - Hamad, M.
AU  - Conti, C.
AU  - Nunes, P.
AU  - Soares, L. D.
PY  - 2022
DO  - 10.1109/IVMSP54334.2022.9816312
CY  - Nafplio
UR  - https://ieeexplore.ieee.org/xpl/conhome/9816065/proceeding
AB  - Deep learning has shown promising results in several computer vision applications, such as style transfer applications. Style transfer aims at generating a new image by combining the content of one image with the style and color palette of another image. When applying style transfer to a 4D Light Field (LF) that represents the same scene from different angular perspectives, new challenges and requirements are involved. While the visually appealing quality of the stylized image is an important criterion in 2D images, cross-view consistency is essential in 4D LFs. Moreover, the need for large datasets to train new robust models arises as another challenge due to the limited LF datasets that are currently available. In this paper, a neural style transfer approach is used, along with a robust propagation based on over-segmentation, to stylize 4D LFs. Experimental results show that the proposed solution outperforms the state-of-the-art without any need for training new models or fine-tuning existing ones, while maintaining consistency across LF views.
ER  -