LIMESA
Light Field Processing for Immersive Media Streaming Applications
Description

1) Development of Enhanced Light Field Representation Solutions – To enable real-time streaming of Light Field (LF) content, flexible LF coded representations will be investigated, aiming to manage the massive amount of data involved and to anticipate the user’s movement in a fully immersive experience. For this purpose, scalable LF coding solutions will be developed to support random access and region-of-interest (ROI) coding with high coding efficiency.
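As a very simplified illustration of the random-access and ROI functionality targeted above (not the coding solution itself), the following Python sketch extracts only the angular and spatial region of a 4D LF needed for a requested viewpoint; the array layout LF[u, v, y, x, c] and all names are assumptions made for the example.

```python
# Minimal sketch (not the project's codec): random access / ROI on a 4D light
# field stored as a NumPy array lf[u, v, y, x, c], where (u, v) index the
# angular views and (y, x) the pixels of each view.
import numpy as np

def extract_roi(lf: np.ndarray, u_range: slice, v_range: slice,
                y_range: slice, x_range: slice) -> np.ndarray:
    """Return only the angular/spatial region of interest, so that a decoder
    could reconstruct the requested viewpoint without touching the rest of
    the light field."""
    return lf[u_range, v_range, y_range, x_range, :]

# Example: 9x9 views of 512x512 RGB pixels; request a 3x3 angular
# neighbourhood around the centre view and a 128x128 spatial window.
lf = np.zeros((9, 9, 512, 512, 3), dtype=np.uint8)
roi = extract_roi(lf, slice(3, 6), slice(3, 6), slice(192, 320), slice(192, 320))
print(roi.shape)  # (3, 3, 128, 128, 3)
```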

2) Development of Light Field Processing Tools – The different LF capturing approaches have different spatio-angular trade-offs and may suffer from low spatial resolution, limited depth of field, or high computational complexity. To overcome such limitations, advanced algorithms are needed that can estimate accurate geometry information, create 3D models from LFs, and synthesize spatially/angularly super-resolved images with high quality and efficiency. To this end, efficient LF geometry estimation and virtual view synthesis algorithms beyond conventional multi-view approaches will be investigated. Tools such as segmentation and inpainting, which may be especially useful for interactive LF editing, will also be considered.
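For context, the sketch below shows a baseline disparity-based view synthesis step of the kind the project aims to improve upon, assuming a disparity map has already been estimated; the function name, sign convention, and nearest-neighbour warping are illustrative assumptions, not the project's algorithms.

```python
# Minimal sketch: render an intermediate view between two horizontally
# adjacent sub-aperture views by backward warping the left view with a given
# per-pixel disparity map (assumed known). Sign convention depends on the
# baseline direction and is assumed here for illustration only.
import numpy as np

def synthesize_intermediate(left: np.ndarray, disparity: np.ndarray,
                            alpha: float) -> np.ndarray:
    """left: (H, W) or (H, W, C) image; disparity: (H, W) map in pixels;
    alpha: fractional angular position in [0, 1] between the two views."""
    h, w = disparity.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.clip(np.round(xs - alpha * disparity).astype(int), 0, w - 1)
    return left[ys, src_x]  # nearest-neighbour backward warp

# Example with synthetic data: constant disparity of 2 pixels.
left = np.random.rand(64, 64, 3)
disp = np.full((64, 64), 2.0)
mid = synthesize_intermediate(left, disp, alpha=0.5)
print(mid.shape)  # (64, 64, 3)
```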

3) Development of Efficient Packaging Solutions for Light Field Streaming – Ultra-realistic scene rendering from LFs is a very appealing functionality for future interactive and immersive streaming services. One reason for this is that the computational cost of rendering is decoupled from the complexity of the rendered scene, contrary to what happens with computer-generated 3D scenes. However, LF imaging requires a huge amount of data for proper scene rendering. To enable interactive LF rendering without requiring the whole LF to be available at the receiver, efficient packaging of the encoded LF content is needed, so that network delivery can be restricted to only the subset of the LF that is needed to reconstruct the required view. For this to be done efficiently, adequate prediction mechanisms for view switching must be investigated. A possible starting point is to convert the LF into a pseudo-video sequence and segment it following the MPEG-DASH approach.
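A minimal sketch of the pseudo-video idea mentioned above, under assumed conventions (a serpentine angular scan order and a fixed number of frames per segment as a stand-in for MPEG-DASH segmentation); the function names are hypothetical and do not reflect the project's design.

```python
# Minimal sketch: serialize the sub-aperture views of a 4D light field
# lf[u, v, y, x, c] into a frame sequence, then group frames into
# independently deliverable segments (a simplified stand-in for DASH segments).
import numpy as np

def to_pseudo_video(lf: np.ndarray) -> list[np.ndarray]:
    """Serpentine scan over the (u, v) angular grid: left-to-right on even
    rows, right-to-left on odd rows, to keep consecutive frames similar."""
    frames = []
    for u in range(lf.shape[0]):
        vs = range(lf.shape[1]) if u % 2 == 0 else reversed(range(lf.shape[1]))
        frames.extend(lf[u, v] for v in vs)
    return frames

def segment(frames: list[np.ndarray], frames_per_segment: int) -> list[list[np.ndarray]]:
    """Group frames into segments that could each be encoded and delivered
    independently, enabling view switching at segment boundaries."""
    return [frames[i:i + frames_per_segment]
            for i in range(0, len(frames), frames_per_segment)]

lf = np.zeros((5, 5, 64, 64, 3), dtype=np.uint8)
segments = segment(to_pseudo_video(lf), frames_per_segment=5)
print(len(segments), len(segments[0]))  # 5 5
```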

Funding: FCT; Reference: PTDC/EEI-COM/7096/2020

Internal Partners
Research Centre: IT-Iscte; Research Group: --; Role in Project: Partner; Begin Date: 2021-03-01; End Date: 2024-02-29
External Partners
Institution: Instituto de Telecomunicações (IT); Country: Portugal; Role in Project: Leader; Begin Date: 2021-03-01; End Date: 2024-02-29
Project Team
Name: Caroline Conti; Affiliation: Assistant Professor (DCTI), Associate Researcher (IT-Iscte); Role in Project: Local Coordinator; Begin Date: 2021-03-01; End Date: 2024-02-29
Name: Luís Ducla Soares; Affiliation: Associate Professor with Habilitation (DCTI), Integrated Researcher (IT-Iscte); Role in Project: Principal Researcher; Begin Date: 2021-03-01; End Date: 2024-02-29
Name: Paulo Jorge Lourenço Nunes; Affiliation: Associate Professor (DCTI), Integrated Researcher (IT-Iscte); Role in Project: Researcher; Begin Date: 2021-03-01; End Date: 2024-02-29
Project Fundings
Reference/Code: PTDC/EEI-COM/7096/2020; Funding DOI: --; Funding Type: Contract; Funding Program: FCT/MCTES, Portugal; Funding Amount (Global): 160,285 EUR; Funding Amount (Local): 160,285 EUR; Begin Date: 2021-03-01; End Date: 2024-02-29
Related Research Data Records

No records found.

Related References in the Media

No records found.

Other Outputs

No records found.

Project Files

No records found.
