Publication in conference proceedings
Hand Gesture and Speech in a Multimodal Augmented Reality Environment
Miguel Sales Dias (Dias, M.); João Fernandes (Fernandes, J.); João Tavares (Tavares, J.); Pedro Santos Silva (Silva, P.)
Proc. of the 7th International Workshop on Gesture in Human-Computer Interaction and Simulation, 2007
Year (definitive publication)
2007
Language
English
Country
Portugal
More Information
Web of Science®

This publication is not indexed in Web of Science®

Scopus

This publication is not indexed in Scopus

Google Scholar

This publication is not indexed in Google Scholar

Abstract
Information Technologies (IT) professionals require new ways of interacting with computers through more natural approaches. A natural interaction paradigm is one that needs no intrusive devices, which may confuse users and distract them from their main goal. Computer Vision, for example, has enabled these professionals to explore new ways for humans to interact with machines and computers. The adoption of multimodal interfaces in the framework of augmented reality is one way to address these requirements. The main benefit of such a system is a more transparent, flexible, efficient and expressive means of human-computer interaction. Since multimodal interfaces offer different ways of interacting with the system, errors and action time can be reduced, improving efficiency and effectiveness when executing a given task.

Our work envisages a tool for architects and interior designers which allows designers or clients, via multimodal interaction (gesture and speech), to visualize the placement of real-size furniture using augmented reality. The tool is capable of importing, placing, moving and rotating virtual furniture objects in a real scenario. Users take control of all actions with gestures and speech, and can walk into the augmented scene, seeing it from a variety of angles and distances.

This paper builds on previously obtained knowledge, namely the MX Toolkit library [DBS*03]. This library provides a platform that allows the programmer to combine multimodal interfaces with 3D object interaction and visualization, applied to augmented reality scenarios. Since the final goal of this work was the creation of an augmented reality application, we have integrated a previously developed Augmented Reality authoring tool based on the MX Toolkit: Plaza [S05]. Plaza is a 3D AR authoring module that allows the user to manipulate and modify 3D objects loaded from a predefined database, either in a VR environment, in an AR scenario, or in both.

The proposed logical architecture of the system can be divided into two modules: Plaza, responsible for Augmented Reality authoring and Speech Recognition, and the Gesture Recognition Server, responsible for Hand Gesture recognition. Both modules use the MX Toolkit library and communicate through the TCP/IP COM module. The Gesture Recognition Server also maintains a Gesture Database, which is used at runtime for gesture matching.
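The abstract gives only the coarse shape of the Gesture Recognition Server: it keeps a Gesture Database, matches observed hand gestures against it at runtime, and reports results to Plaza over the TCP/IP COM module. The following is a minimal sketch of that division of labour, not the paper's actual design: the feature-vector gesture representation, the nearest-neighbour matcher, the JSON-lines wire format, and names such as GESTURE_DB, match_gesture and port 5005 are all illustrative assumptions.

```python
import json
import socket
from math import sqrt

# Hypothetical gesture database: one feature vector per hand pose.
# The paper does not specify its gesture representation; fixed-length
# feature vectors are an assumption made for this sketch.
GESTURE_DB = {
    "select":  [0.9, 0.1, 0.1, 0.1, 0.1],
    "grab":    [0.9, 0.9, 0.9, 0.9, 0.9],
    "release": [0.1, 0.1, 0.1, 0.1, 0.1],
}

def match_gesture(features):
    """Return the database gesture nearest (Euclidean) to the observation."""
    def dist(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(GESTURE_DB, key=lambda name: dist(GESTURE_DB[name], features))

def serve(host="127.0.0.1", port=5005):
    """Accept one client and answer gesture-matching requests over TCP."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:  # one JSON request per line
                features = json.loads(line)["features"]
                reply = {"gesture": match_gesture(features)}
                stream.write(json.dumps(reply) + "\n")
                stream.flush()

if __name__ == "__main__":
    serve()
```

Under the same assumptions, a Plaza-side client would connect to that port, send one JSON line per captured hand sample, and map the returned gesture name onto authoring actions such as placing, moving or rotating a furniture object.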
Acknowledgements
--
Keywords
MX Toolkit, Augmented Reality, Plaza Tool, Gesture Recognition, Speech Recognition
  • Computer and Information Sciences - Natural Sciences
  • Electrical Engineering, Electronic Engineering, Information Engineering - Engineering and Technology
