Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.

Export Reference (APA)
Raposo, F., De Matos, D., & Ribeiro, R. (2021). Assessing kinetic meaning of music and dance via deep cross-modal retrieval. Neural Computing and Applications, 33(21), 14481-14493.
Export Reference (IEEE)
F. A. Raposo et al., "Assessing kinetic meaning of music and dance via deep cross-modal retrieval," Neural Computing and Applications, vol. 33, no. 21, pp. 14481-14493, 2021.
Export BibTeX
@article{raposo2021_1732208748226,
	author = "Raposo, F. and De Matos, D. and Ribeiro, R.",
	title = "Assessing kinetic meaning of music and dance via deep cross-modal retrieval",
	journal = "Neural Computing and Applications",
	year = "2021",
	volume = "33",
	number = "21",
	doi = "10.1007/s00521-021-06090-8",
	pages = "14481-14493",
	url = "https://www.springer.com/journal/521"
}
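For scripted reuse of an exported record, the fields of a flat BibTeX entry like the one above can be pulled out with a small standard-library sketch. This assumes the simple one-field-per-line, double-quoted layout shown here; a full BibTeX grammar (nested braces, @string macros) needs a dedicated parser, and the function name below is purely illustrative.

```python
import re

def parse_bibtex_fields(record: str) -> dict:
    """Extract fields from a single flat BibTeX record.

    Assumes the layout used in the export above: one field per line,
    values in double quotes, no nested braces.
    """
    fields = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', record))
    # The entry type and citation key sit in the @article{...} header line.
    header = re.match(r'@(\w+)\{([^,]+),', record)
    if header:
        fields["entrytype"], fields["citekey"] = header.group(1), header.group(2)
    return fields

record = '''@article{raposo2021_1732208748226,
\tauthor = "Raposo, F. and De Matos, D. and Ribeiro, R.",
\ttitle = "Assessing kinetic meaning of music and dance via deep cross-modal retrieval",
\tjournal = "Neural Computing and Applications",
\tyear = "2021",
\tvolume = "33",
\tnumber = "21",
\tdoi = "10.1007/s00521-021-06090-8",
\tpages = "14481-14493",
\turl = "https://www.springer.com/journal/521"
}'''

fields = parse_bibtex_fields(record)
```

In a LaTeX document, the same record would simply be saved to a .bib file and cited via its key, raposo2021_1732208748226.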
Export RIS
TY  - JOUR
TI  - Assessing kinetic meaning of music and dance via deep cross-modal retrieval
T2  - Neural Computing and Applications
VL  - 33
IS  - 21
AU  - Raposo, F.
AU  - De Matos, D.
AU  - Ribeiro, R.
PY  - 2021
SP  - 14481-14493
SN  - 0941-0643
DO  - 10.1007/s00521-021-06090-8
UR  - https://www.springer.com/journal/521
AB  - Music semantics is embodied, in the sense that meaning is biologically mediated by and grounded in the human body and brain. This embodied cognition perspective also explains why music structures modulate kinetic and somatosensory perception. We explore this aspect of cognition, by considering dance as an overt expression of semantic aspects of music related to motor intention, in an artificial deep recurrent neural network that learns correlations between music audio and dance video. We claim that, just like human semantic cognition is based on multimodal statistical structures, joint statistical modeling of music and dance artifacts is expected to capture semantics of these modalities. We evaluate the ability of this model to effectively capture underlying semantics in a cross-modal retrieval task, including dance styles in an unsupervised fashion. Quantitative results, validated with statistical significance testing, strengthen the body of evidence for embodied cognition in music and demonstrate the model can recommend music audio for dance video queries and vice versa.
ER  -
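The RIS record above is line-oriented: each line carries a two-letter tag, two spaces, a hyphen, and the value, with ER terminating the record. A minimal reader for this shape (a standard-library sketch; repeatable tags such as AU are collected into lists, and the function name is illustrative) could look like:

```python
import re

def parse_ris(text: str) -> dict:
    """Parse a single RIS record into a dict.

    Lines have the form 'TAG  - value'; repeatable tags (e.g. AU)
    accumulate into a list, and the ER tag ends the record.
    """
    record = {}
    for line in text.splitlines():
        m = re.match(r'([A-Z][A-Z0-9])  - ?(.*)', line)
        if not m:
            continue  # skip blank or malformed lines
        tag, value = m.group(1), m.group(2).strip()
        if tag == "ER":
            break  # end-of-record marker
        record.setdefault(tag, []).append(value)
    # Unwrap tags that occurred only once, for convenience.
    return {t: v[0] if len(v) == 1 else v for t, v in record.items()}

ris = """TY  - JOUR
TI  - Assessing kinetic meaning of music and dance via deep cross-modal retrieval
T2  - Neural Computing and Applications
VL  - 33
IS  - 21
AU  - Raposo, F.
AU  - De Matos, D.
AU  - Ribeiro, R.
PY  - 2021
DO  - 10.1007/s00521-021-06090-8
ER  -
"""

rec = parse_ris(ris)
```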