Export Publication

The publication can be exported in the following formats: APA (American Psychological Association) reference, IEEE (Institute of Electrical and Electronics Engineers) reference, BibTeX, and RIS.

Export Reference (APA)
Dias, J., Fernandes, J., Tavares, J., & Pedro, S. (2007). Hand Gesture and Speech in a Multimodal Augmented Reality Environment. In Ana Rita Leitão, Miguel Sales Dias, & Ricardo Jota (Eds.), Proc The 7th International Workshop on Gesture in Human-Computer Interaction and Simulation 2007 (pp. 54-55). Lisboa: ADETTI.
Export Reference (IEEE)
J. M. Dias et al., "Hand Gesture and Speech in a Multimodal Augmented Reality Environment", in Proc The 7th Int. Workshop on Gesture in Human-Computer Interaction and Simulation 2007, Ana Rita Leitão, Miguel Sales Dias, Ricardo Jota, Eds., Lisboa, ADETTI, 2007, pp. 54-55.
Export BibTeX
@inproceedings{dias2007_1732210634768,
	author = "Dias, J. and Fernandes, J. and Tavares, J. and Pedro, S.",
	title = "Hand Gesture and Speech in a Multimodal Augmented Reality Environment",
	booktitle = "Proc The 7th International Workshop on Gesture in Human-Computer Interaction and Simulation 2007",
	year = "2007",
	editor = "Ana Rita Leitão, Miguel Sales Dias, Ricardo Jota",
	volume = "",
	number = "",
	series = "",
	pages = "54-55",
	publisher = "ADETTI",
	address = "Lisboa",
	organization = "ADETTI - Associação para o Desenvolvimento das Telecomunicações e Técnicas de Informática",
	url = "https://www.jvrb.org/old-content/jvrb/pastconferences/PastConferences2007/gw2007/view"
}
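
The BibTeX entry above can be consumed programmatically. As a minimal sketch, assuming the entry is saved to a file named ref.bib (a hypothetical name) and the third-party bibtexparser package (v1 API) is installed:

import bibtexparser

# Load the exported entry; bibtexparser lowercases field names and exposes
# the citation key as "ID" and the entry type as "ENTRYTYPE".
with open("ref.bib", encoding="utf-8") as f:
    db = bibtexparser.load(f)

entry = db.entries[0]
print(entry["ID"])      # dias2007_1732210634768
print(entry["title"])   # Hand Gesture and Speech in a Multimodal ...
print(entry["year"])    # 2007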
Export RIS
TY  - CPAPER
TI  - Hand Gesture and Speech in a Multimodal Augmented Reality Environment
T2  - Proc The 7th International Workshop on Gesture in Human-Computer Interaction and Simulation 2007
AU  - Dias, J.
AU  - Fernandes, J.
AU  - Tavares, J.
AU  - Pedro, S.
PY  - 2007
SP  - 54
EP  - 55
CY  - Lisboa
PB  - ADETTI
A2  - Leitão, Ana Rita
A2  - Dias, Miguel Sales
A2  - Jota, Ricardo
UR  - https://www.jvrb.org/old-content/jvrb/pastconferences/PastConferences2007/gw2007/view
AB  - Information Technologies (IT) professionals require new ways of interacting with
computers using more natural approaches. A natural paradigm of interaction is one
that doesn’t need any intrusive devices, which may be confusing to users, therefore
distracting them from their main goal. Computer Vision, as an example, has enabled
these professionals to explore new ways for humans to interact with machines and
computers. The adoption of multimodal interfaces in the framework of augmented
reality is one way to address these requirements. The main benefit of using a system
of this kind is the provision of a more transparent, flexible, efficient and expressive
means of human-computer interaction. Since multimodal interfaces offer different
possibilities of interacting with the system, errors and time of action can be reduced,
improving efficiency and effectiveness while executing a certain task. Our work
envisages the creation of a tool for architects and interior designers which allows, via
multimodal interaction (gesture and speech), designers or clients, to visualize the
implementation of real size furniture using augmented reality. The tool is capable of
importing, disposing, moving and rotating virtual furniture objects in a real scenario.
The users are able to take control of all actions with gestures and speech, and to walk
into the augmented scene, seeing it from a variety of angles and distances. This paper
exploits some previously obtained knowledge, namely the MX Toolkit library
[DBS*03]. This library conveys a platform, which allows the programmer to combine
multimodal interfaces with 3D object interaction and visualization, applied to
augmented reality scenarios. Since the final goal of this paper was the creation of an
augmented reality computational application, we have integrated a previously developed
Augmented Reality Authoring tool based on MX Toolkit, Plaza [S05]. Plaza is a
3D AR authoring module that allows the user to manipulate and modify 3D objects
loaded from a predefined database either in a VR environment, in an AR scenario or
in both. The proposed logical architecture of the system is depicted in the picture below.
It can be divided into two modules: Plaza, responsible for Augmented Reality
authoring and Speech Recognition and the Gesture Recognition Server, responsible
for Hand Gesture recognition. Both modules use the MX Toolkit library and
communicate through the TCP/IP COM module. The Gesture Recognition Server also
maintains a Gesture Database, which will be used at runtime for gesture matching.
ER  -
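
The RIS record can likewise be read back with a few lines of code. The sketch below is a minimal, hand-rolled reader using only the Python standard library (a dedicated package such as rispy would be more robust in practice); the file name export.ris is hypothetical.

def parse_ris(text):
    # Collect tagged lines ("XX  - value") into per-record dicts of lists;
    # repeatable tags such as AU accumulate, and untagged lines are treated
    # as continuations of the previous value (the wrapped AB field above).
    records, current, last = [], {}, None
    for line in text.splitlines():
        if len(line) >= 5 and line[2:5] == "  -":
            tag, value = line[:2], line[6:].strip()
            if tag == "ER":
                records.append(current)
                current, last = {}, None
            else:
                current.setdefault(tag, []).append(value)
                last = tag
        elif line.strip() and last:
            current[last][-1] += " " + line.strip()
    return records

with open("export.ris", encoding="utf-8") as f:
    record = parse_ris(f.read())[0]
print(record["TI"][0])  # Hand Gesture and Speech in a Multimodal ...
print(record["AU"])     # ['Dias, J.', 'Fernandes, J.', 'Tavares, J.', 'Pedro, S.']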