Export Publication
The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS. Each format is shown below for the same publication.
Jardim, D., Nunes, L., & Dias, M. (2016). Impact of automated action labeling in classification of human actions in RGB-D videos. In F. Van Harmelen, V. Dignum, F. Dignum, P. Bouquet, M. Fox, G. A. Kaminka, & E. Hüllermeier (Eds.), ECAI 2016: 22nd European Conference on Artificial Intelligence (pp. 1632-1633). The Hague: IOS Press.
D. W. Jardim et al., "Impact of automated action labeling in classification of human actions in RGB-D videos," in ECAI 2016: 22nd European Conf. on Artificial Intelligence, F. Van Harmelen, V. Dignum, F. Dignum, P. Bouquet, M. Fox, G. A. Kaminka, and E. Hüllermeier, Eds., The Hague: IOS Press, 2016, vol. 285, pp. 1632-1633.
@inproceedings{jardim2016_1734635576429,
  author       = "Jardim, D. and Nunes, L. and Dias, M.",
  title        = "Impact of automated action labeling in classification of human actions in RGB-D videos",
  booktitle    = "ECAI 2016: 22nd European Conference on Artificial Intelligence",
  year         = "2016",
  editor       = "Van Harmelen, F. and Dignum, V. and Dignum, F. and Bouquet, P. and Fox, M. and Kaminka, G. A. and Hüllermeier, E.",
  volume       = "285",
  doi          = "10.3233/978-1-61499-672-9-1632",
  pages        = "1632-1633",
  publisher    = "IOS Press",
  address      = "The Hague",
  organization = "European Association for Artificial Intelligence",
  url          = "http://www.scopus.com/inward/record.url?eid=2-s2.0-85013073342&partnerID=MN8TOARS"
}
TY  - CPAPER
TI  - Impact of automated action labeling in classification of human actions in RGB-D videos
T2  - ECAI 2016: 22nd European Conference on Artificial Intelligence
VL  - 285
AU  - Jardim, D.
AU  - Nunes, L.
AU  - Dias, M.
PY  - 2016
SP  - 1632-1633
DO  - 10.3233/978-1-61499-672-9-1632
CY  - The Hague
UR  - http://www.scopus.com/inward/record.url?eid=2-s2.0-85013073342&partnerID=MN8TOARS
AB  - For many applications it is important to be able to detect what a human is currently doing. This ability is useful for applications such as surveillance, human computer interfaces, games and healthcare. In order to recognize a human action, the typical approach is to use manually labeled data to perform supervised training. This paper aims to compare the performance of several supervised classifiers trained with manually labeled data versus the same classifiers trained with data automatically labeled. In this paper we propose a framework capable of recognizing human actions using supervised classifiers trained with automatically labeled data in RGB-D videos.
ER  - 
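Because the BibTeX and RIS exports are plain-text, tag-based formats, they can be consumed programmatically by reference managers or scripts. Below is a minimal sketch, assuming plain Python with no external libraries, of how an exported RIS record could be parsed into a dictionary. The parse_ris function and RIS_RECORD constant are hypothetical names introduced here for illustration, not part of the export tool, and the record is abbreviated (the AB line is omitted) for brevity.

import re

# A RIS line is a two-character tag, whitespace, a hyphen, and the value.
TAG_RE = re.compile(r"^([A-Z][A-Z0-9])\s+-\s*(.*)$")

def parse_ris(text):
    """Parse a single RIS record into a dict; repeated tags (e.g. AU) become lists."""
    record = {}
    for line in text.splitlines():
        match = TAG_RE.match(line)
        if not match:
            continue                     # skip blank or malformed lines
        tag, value = match.group(1), match.group(2).strip()
        if tag == "ER":                  # end-of-record marker
            break
        record.setdefault(tag, []).append(value)
    # Flatten tags that occurred only once to plain strings for convenience.
    return {tag: vals[0] if len(vals) == 1 else vals for tag, vals in record.items()}

# Abbreviated copy of the exported record above (hypothetical test data).
RIS_RECORD = """\
TY  - CPAPER
TI  - Impact of automated action labeling in classification of human actions in RGB-D videos
AU  - Jardim, D.
AU  - Nunes, L.
AU  - Dias, M.
PY  - 2016
DO  - 10.3233/978-1-61499-672-9-1632
CY  - The Hague
ER  - 
"""

if __name__ == "__main__":
    ref = parse_ris(RIS_RECORD)
    print(ref["TI"])   # single-valued tag -> string
    print(ref["AU"])   # repeated tag -> ['Jardim, D.', 'Nunes, L.', 'Dias, M.']

The same line-oriented approach does not transfer to BibTeX, whose nested brace/quote syntax is better handled by a dedicated parser or a reference manager.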