Export Publication
The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.
APA:
Freitas, J., Teixeira, A. & Dias, J. (2014). Multimodal corpora for silent speech interaction. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk and Stelios Piperidis (Ed.), Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014) (pp. 4507-4511). Reykjavik: European Language Resources Association (ELRA).
IEEE:
J. Freitas et al., "Multimodal corpora for silent speech interaction", in Proc. of the Ninth Int. Conf. on Language Resources and Evaluation (LREC 2014), Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk and Stelios Piperidis, Ed., Reykjavik, European Language Resources Association (ELRA), 2014, pp. 4507-4511.
BibTeX:
@inproceedings{freitas2014_1732211672152,
  author       = "Freitas, J. and Teixeira, A. and Dias, J.",
  title        = "Multimodal corpora for silent speech interaction",
  booktitle    = "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014)",
  year         = "2014",
  editor       = "Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk and Stelios Piperidis",
  pages        = "4507-4511",
  publisher    = "European Language Resources Association (ELRA)",
  address      = "Reykjavik",
  organization = "European Language Resources Association",
  url          = "https://www.aclweb.org/anthology/L14-1243/"
}
RIS:
TY - CPAPER
TI - Multimodal corpora for silent speech interaction
T2 - Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC 2014)
AU - Freitas, J.
AU - Teixeira, A.
AU - Dias, J.
PY - 2014
SP - 4507-4511
CY - Reykjavik
UR - https://www.aclweb.org/anthology/L14-1243/
AB - A Silent Speech Interface (SSI) allows for speech communication to take place in the absence of an acoustic signal. This type of interface is an alternative to conventional Automatic Speech Recognition which is not adequate for users with some speech impairments or in the presence of environmental noise. The work presented here produces the conditions to explore and analyze complex combinations of input modalities applicable in SSI research. By exploring non-invasive and promising modalities, we have selected the following sensing technologies used in human-computer interaction: Video and Depth input, Ultrasonic Doppler sensing and Surface Electromyography. This paper describes a novel data collection methodology where these independent streams of information are synchronously acquired with the aim of supporting research and development of a multimodal SSI. The reported recordings were divided into two rounds: a first one where the acquired data was silently uttered and a second round where speakers pronounced the scripted prompts in an audible and normal tone. In the first round of recordings, a total of 53.94 minutes were captured where 30.25% was estimated to be silent speech. In the second round of recordings, a total of 30.45 minutes were obtained and 30.05% of the recordings were audible speech. ER -
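An exported RIS record like the one above is easy to process programmatically, since it is a flat sequence of two-letter tag/value lines. The sketch below is a minimal illustration (not part of the export tool) of parsing such a record with only the Python standard library; the `parse_ris` function name and the shortened sample record are assumptions for the example.

```python
import re

# Shortened sample in the same tag/value layout as the RIS export above
# (hypothetical abbreviated record, for illustration only).
ris = """TY - CPAPER
TI - Multimodal corpora for silent speech interaction
AU - Freitas, J.
AU - Teixeira, A.
AU - Dias, J.
PY - 2014
ER -"""

def parse_ris(text):
    """Collect RIS tag/value pairs into a dict of lists."""
    record = {}
    for line in text.splitlines():
        # A RIS line is a two-character tag, a hyphen, then the value.
        m = re.match(r"^([A-Z][A-Z0-9])\s+-\s*(.*)$", line)
        if not m:
            continue
        tag, value = m.group(1), m.group(2)
        # Repeatable tags such as AU (author) accumulate into a list.
        record.setdefault(tag, []).append(value)
    return record

rec = parse_ris(ris)
print(rec["TI"][0])  # the title
print(rec["AU"])     # all authors, in order
```

Collecting every tag into a list keeps repeatable fields such as `AU` intact while still giving single-valued fields (e.g. `TI`, `PY`) a uniform shape.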