Export Publication
The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.
APA:
Jardim, D., Nunes, L., & Dias, M. (2016). Human activity recognition from automatically labeled data in RGB-D videos. In 2016 8th Computer Science and Electronic Engineering (CEEC) (pp. 89-94). Colchester, UK: IEEE.
IEEE:
D. W. Jardim et al., "Human activity recognition from automatically labeled data in RGB-D videos," in 2016 8th Computer Science and Electronic Engineering (CEEC), Colchester, UK, IEEE, 2016, pp. 89-94.
BibTeX:
@inproceedings{jardim2016_1729033444981,
  author       = "Jardim, D. and Nunes, L. and Dias, M.",
  title        = "Human activity recognition from automatically labeled data in RGB-D videos",
  booktitle    = "2016 8th Computer Science and Electronic Engineering (CEEC)",
  year         = "2016",
  editor       = "",
  volume       = "",
  number       = "",
  series       = "",
  doi          = "10.1109/CEEC.2016.7835894",
  pages        = "89-94",
  publisher    = "IEEE",
  address      = "Colchester, UK",
  organization = "IEEE, University of Essex",
  url          = "https://ieeexplore.ieee.org/xpl/conhome/7827233/proceeding"
}
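A BibTeX export like the one above is just a set of `field = "value"` pairs, so downstream tools can read it with a simple pattern match. The sketch below is a minimal, hypothetical helper (not part of any export tool) that pulls the non-empty fields of a single flat entry into a dictionary:

```python
import re

def parse_bibtex_fields(entry: str) -> dict:
    """Return a dict of field name -> value for one flat BibTeX entry.

    Minimal sketch: assumes double-quoted values with no nested quotes,
    as produced by a simple exporter.
    """
    fields = {}
    for name, value in re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry):
        if value:  # skip empty exported fields such as volume = ""
            fields[name] = value
    return fields

# Abbreviated sample entry for illustration
entry = ('@inproceedings{jardim2016_1729033444981, '
         'author = "Jardim, D. and Nunes, L. and Dias, M.", '
         'year = "2016", pages = "89-94", volume = "" }')
print(parse_bibtex_fields(entry))
# → {'author': 'Jardim, D. and Nunes, L. and Dias, M.', 'year': '2016', 'pages': '89-94'}
```

For anything beyond simple exports (braced values, nested braces, `@string` macros), a dedicated BibTeX parser library would be the safer choice.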
RIS:
TY  - CPAPER
TI  - Human activity recognition from automatically labeled data in RGB-D videos
T2  - 2016 8th Computer Science and Electronic Engineering (CEEC)
AU  - Jardim, D.
AU  - Nunes, L.
AU  - Dias, M.
PY  - 2016
SP  - 89-94
DO  - 10.1109/CEEC.2016.7835894
CY  - Colchester, UK
UR  - https://ieeexplore.ieee.org/xpl/conhome/7827233/proceeding
AB  - Human Activity Recognition (HAR) is an interdisciplinary research area that has been attracting interest from several research communities specialized in machine learning, computer vision, medical and gaming research. The potential applications range from surveillance systems, human computer interfaces, sports video analysis, digital shopping assistants, video retrieval, games and health-care. Several and diverse approaches exist to recognize a human action. From computer vision techniques, modeling relations between human motion and objects, marker-based tracking systems and RGB-D cameras. Using a Kinect sensor that provides the position of the main skeleton joints we extract features based solely on the motion of those joints. This paper aims to compare the performance of several supervised classifiers trained with manually labeled data versus the same classifiers trained with data automatically labeled. We propose a framework capable of recognizing human actions using supervised classifiers trained with automatically labeled data.
ER  - 
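RIS is line-oriented: each line carries a two-letter tag, two spaces, a hyphen, a space, and the value, with `ER` closing the record, and repeatable tags such as `AU` appearing once per author. A minimal sketch of reading one record under that standard layout (a hypothetical helper, not part of any export tool):

```python
def parse_ris(text: str) -> dict:
    """Collect tag -> list of values for a single RIS record (sketch).

    Assumes the standard "XX  - value" line layout; repeatable tags
    (e.g. AU) accumulate into a list.
    """
    record = {}
    for line in text.splitlines():
        if line[2:6] != "  - ":   # skip anything that is not "XX  - value"
            continue
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":           # end-of-record marker
            break
        record.setdefault(tag, []).append(value)
    return record

# Abbreviated sample record for illustration
ris = """TY  - CPAPER
AU  - Jardim, D.
AU  - Nunes, L.
AU  - Dias, M.
PY  - 2016
DO  - 10.1109/CEEC.2016.7835894
ER  - 
"""
print(parse_ris(ris)["AU"])
# → ['Jardim, D.', 'Nunes, L.', 'Dias, M.']
```

Keeping every value in a list keeps repeatable tags and single-valued tags uniform; a consumer that wants a scalar (e.g. `PY`) can take the first element.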