Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.
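All four exports are serializations of the same underlying record. As a rough sketch only (the repository's actual data model is not shown here, and these Python field names are assumptions), the metadata behind the entries below could be held in a plain dictionary, which the code sketches after the BibTeX and RIS blocks reuse:

publication = {
    "key": "jardim2015_1711657311457",
    "authors": ["Jardim, D.", "Nunes, L.", "Dias, J."],
    "title": "Human activity recognition and prediction",
    "booktitle": "ICPRAM 2015: Proceedings of the International Conference on Pattern Recognition Applications and Methods",
    "editors": ["Maria De Marsico", "Mário Figueiredo", "Ana Fred"],
    "year": 2015,
    "pages": "24-32",
    "publisher": "SCITEPRESS",
    "address": "Lisboa",
    "doi": "10.5220/0005327200240032",
    "url": "https://icpram.scitevents.org/DoctoralConsortium.aspx?y=2015",
}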

Export Reference (APA)
Jardim, D., Nunes, L., & Dias, J. (2015). Human activity recognition and prediction. In M. De Marsico, M. Figueiredo, & A. Fred (Eds.), ICPRAM 2015: Proceedings of the International Conference on Pattern Recognition Applications and Methods (pp. 24-32). Lisboa: SCITEPRESS.
Export Reference (IEEE)
D. Jardim, L. Nunes, and J. Dias, "Human activity recognition and prediction," in ICPRAM 2015: Proc. of the Int. Conf. on Pattern Recognition Applications and Methods, M. De Marsico, M. Figueiredo, and A. Fred, Eds., Lisboa: SCITEPRESS, 2015, pp. 24-32.
Export BibTeX
@inproceedings{jardim2015_1711657311457,
	author = "Jardim, D. and Nunes, L. and Dias, J.",
	title = "Human activity recognition and prediction",
	booktitle = "ICPRAM 2015: Proceedings of the International Conference on Pattern Recognition Applications and Methods",
	year = "2015",
	editor = "Maria De Marsico, Mário Figueiredo, Ana Fred",
	volume = "",
	number = "",
	series = "",
	doi = "10.5220/0005327200240032",
	pages = "24-32",
	publisher = "SCITEPRESS",
	address = "Lisboa",
	organization = "Institute for Systems and Technologies of Information, Control and Communication (INSTICC) ",
	url = "https://icpram.scitevents.org/DoctoralConsortium.aspx?y=2015"
}
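Note that BibTeX separates multiple personal names with the keyword "and"; a comma inside a name field is read as the "Last, First" delimiter, which is why the editor list above must not be comma-separated. As a minimal illustration (the helper to_bibtex and the publication dictionary from the sketch above are assumptions, not the repository's code), the entry could be generated like this:

def to_bibtex(pub):
    # Field order mirrors the exported entry above.
    fields = [
        ("author", " and ".join(pub["authors"])),   # "and" is the BibTeX name separator
        ("title", pub["title"]),
        ("booktitle", pub["booktitle"]),
        ("year", str(pub["year"])),
        ("editor", " and ".join(pub["editors"])),
        ("doi", pub["doi"]),
        ("pages", pub["pages"]),
        ("publisher", pub["publisher"]),
        ("address", pub["address"]),
        ("url", pub["url"]),
    ]
    body = ",\n".join('\t%s = "%s"' % (k, v) for k, v in fields)
    return "@inproceedings{%s,\n%s\n}" % (pub["key"], body)

In a LaTeX document, the exported entry can then be cited by its key, e.g. \cite{jardim2015_1711657311457}, after adding it to the bibliography file.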
Export RIS
TY  - CPAPER
TI  - Human activity recognition and prediction
T2  - ICPRAM 2015: Proceedings of the International Conference on Pattern Recognition Applications and Methods
AU  - Jardim, D.
AU  - Nunes, L.
AU  - Dias, J.
PY  - 2015
SP  - 24
EP  - 32
SN  - 2184-4313
DO  - 10.5220/0005327200240032
CY  - Lisboa
UR  - https://icpram.scitevents.org/DoctoralConsortium.aspx?y=2015
AB  - Human activity recognition (HAR) has become one of the most active research topics in image processing and pattern recognition (Aggarwal, J. K. and Ryoo, M. S., 2011). Detecting specific activities in a live feed or searching in video archives still relies almost completely on human resources. Detecting multiple activities in real-time video feeds is currently performed by assigning multiple analysts to watch the same video stream simultaneously. Manual analysis of video is labor intensive, fatiguing, and error prone. Solving the problem of recognizing human activities from video can lead to improvements in several application fields, such as surveillance systems, human-computer interfaces, sports video analysis, digital shopping assistants, video retrieval, gaming, and health care (Popa et al., n.d.; Niu, W. et al., n.d.; Intille, S. S., 1999; Keller, C. G., 2011). This area has grown dramatically in the past 10 years, and throughout our research we identified a potentially underexplored sub-area: Action Prediction. What if we could infer the future actions of people from visual input? We propose to expand current vision-based activity analysis to a level where it is possible to predict the future actions executed by a subject. We are interested in interactions that can involve a single actor, two humans, and/or simple objects. For example, we may try to predict whether “a person will cross the street” or “a person will try to steal a handbag from another”, or where a tennis player will target the next volley. Using a hierarchical approach, we intend to represent high-level human activities as compositions of simpler activities, usually called sub-events, which may themselves be decomposable. We expect to develop a system capable of predicting the next action in a sequence, initially using offline learning to bootstrap the system and then, with self-improvement/task specialization in mind, using online learning.
ER  -
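RIS is a line-oriented format: each line carries a two-letter tag, two spaces, a hyphen, and a space before the value, and "ER  -" terminates the record. A minimal parser sketch (the function name parse_ris is illustrative; it assumes a single record with repeatable tags such as AU):

def parse_ris(text):
    # Map each tag to the list of values it appears with.
    record = {}
    for line in text.splitlines():
        if line[:2] == "ER":        # end-of-record marker
            break
        if line[2:6] != "  - ":     # skip blank or malformed lines
            continue
        record.setdefault(line[:2], []).append(line[6:].strip())
    return record

Applied to the record above, record["AU"] would be ["Jardim, D.", "Nunes, L.", "Dias, J."] and record["TI"][0] the title.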