Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.

Export Reference (APA)
Vieira, D., Freitas, J. D., Acartürk, C., Teixeira, A., Sousa, F., Candeias, S., & Dias, J. (2015). “Read That Article”: Exploring synergies between gaze and speech interaction. In Yeliz Yesilada (Ed.), ASSETS '15: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility (pp. 341-342). Lisboa: ACM Press.
Export Reference (IEEE)
D. Vieira et al., "“Read That Article”: Exploring synergies between gaze and speech interaction", in ASSETS '15: Proc. of the 17th Int. ACM SIGACCESS Conf. on Computers & Accessibility, Yeliz Yesilada, Ed., Lisboa, ACM Press, 2015, pp. 341-342.
Export BibTeX
@inproceedings{vieira2015_1721646859860,
	author = "Vieira, D. and Freitas, J. D. and Acartürk, C. and Teixeira, A. and Sousa, F. and Candeias, S. and Dias, J.",
	title = "“Read That Article”: Exploring synergies between gaze and speech interaction",
	booktitle = "ASSETS '15: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility",
	year = "2015",
	editor = "Yeliz Yesilada",
	doi = "10.1145/2700648.2811369",
	pages = "341-342",
	publisher = "ACM Press",
	address = "Lisboa",
	organization = "SIGACCESS, ACM Special Interest Group on Accessible Computing",
	url = "https://dl.acm.org/citation.cfm?id=2811369"
}
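
For scripted reuse, the exported BibTeX record can be read programmatically. The Python sketch below is a minimal, illustrative parser for entries in exactly the field = "value" shape shown above; the helper name parse_bibtex_entry and the file name vieira2015.bib are assumptions, and an established library such as bibtexparser is the safer choice for general BibTeX.

import re

def parse_bibtex_entry(entry: str) -> dict:
    # Hypothetical helper: collect `field = "value"` pairs into a dict.
    # It only covers double-quoted values, as in the record above.
    fields = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', entry))
    # The entry type and citation key sit in the `@type{key,` header.
    header = re.match(r'@(\w+)\{([^,]+),', entry.strip())
    if header:
        fields["entrytype"], fields["citekey"] = header.group(1), header.group(2)
    return fields

with open("vieira2015.bib", encoding="utf-8") as fh:  # hypothetical file name
    record = parse_bibtex_entry(fh.read())
print(record["title"], record["doi"])
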
Export RIS
TY  - CPAPER
TI  - “Read That Article”: Exploring synergies between gaze and speech interaction
T2  - ASSETS '15: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility
AU  - Vieira, D.
AU  - Freitas, J. D.
AU  - Acartürk, C.
AU  - Teixeira, A.
AU  - Sousa, F.
AU  - Candeias, S.
AU  - Dias, J.
PY  - 2015
SP  - 341
EP  - 342
DO  - 10.1145/2700648.2811369
CY  - Lisboa
UR  - https://dl.acm.org/citation.cfm?id=2811369
AB  - Gaze information has the potential to benefit Human-Computer Interaction (HCI) tasks, particularly when combined with speech. Gaze can improve our understanding of the user intention, as a secondary input modality, or it can be used as the main input modality by users with some level of permanent or temporary impairments. In this paper we describe a multimodal HCI system prototype which supports speech, gaze and the combination of both. The system has been developed for Active Assisted Living scenarios.
ER  -
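
The RIS record is line-oriented ("TAG  - value" with a two-character tag), so it can be read with the Python standard library alone. The sketch below is minimal and illustrative, with the file name vieira2015.ris assumed; repeatable tags such as AU are gathered into lists, and a packaged parser (e.g. the rispy package) is preferable for production use.

def parse_ris(text: str) -> dict:
    record: dict = {}
    for line in text.splitlines():
        # RIS data lines look like "AU  - Vieira, D."; anything shorter,
        # including the bare "ER  -" terminator, is skipped.
        if len(line) < 7 or line[4:6] != "- ":
            continue
        tag, value = line[:2], line[6:].strip()
        record.setdefault(tag, []).append(value)
    # Collapse tags that occurred only once to plain strings for convenience.
    return {tag: v[0] if len(v) == 1 else v for tag, v in record.items()}

with open("vieira2015.ris", encoding="utf-8") as fh:  # hypothetical file name
    rec = parse_ris(fh.read())
print(rec["TI"])   # title string
print(rec["AU"])   # list of author strings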