Publication in conference proceedings
Enhancing multimodal silent speech interfaces with feature selection
João Freitas (Freitas, J.); António Teixeira (Teixeira, A.); Miguel Sales Dias (Dias, J.); Artur Ferreira (Ferreira, A.); Mário A. T. Figueiredo (Figueiredo, M.A.T.);
15th Annual Conference of the International Speech Communication Association (INTERSPEECH 2014), Proceedings
Year (definitive publication)
2014
Language
English
Country
Singapore
More Information
Web of Science®: Times Cited: 9 (Last checked: 2024-05-19 10:01)
Scopus: Times Cited: 15 (Last checked: 2024-05-15 07:30)
Google Scholar: Times Cited: 24 (Last checked: 2024-05-19 01:02)
Abstract
In research on Silent Speech Interfaces (SSI), different sources of information (modalities) have been combined, aiming at obtaining better performance than the individual modalities. However, when combining these modalities, the dimensionality of the feature space rapidly increases, yielding the well-known "curse of dimensionality". As a consequence, in order to extract useful information from this data, one has to resort to feature selection (FS) techniques to lower the dimensionality of the learning space. In this paper, we assess the impact of FS techniques for silent speech data, in a dataset with 4 non-invasive and promising modalities, namely: video, depth, ultrasonic Doppler sensing, and surface electromyography. We consider two supervised (mutual information and Fisher's ratio) and two unsupervised (mean-median and arithmetic mean-geometric mean) FS filters. The evaluation was made by assessing the classification accuracy (word recognition error) of three well-known classifiers (k-nearest neighbors, support vector machines, and dynamic time warping). The key results of this study show that both unsupervised and supervised FS techniques improve the classification accuracy on both individual and combined modalities. For instance, on the video component, we attain relative performance gains of 36.2% in error rates. FS is also useful as pre-processing for feature fusion.
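As a rough illustration of the two unsupervised filters named in the abstract (mean-median and arithmetic mean-geometric mean), the following Python sketch scores each feature independently and keeps the top-k highest-scoring ones. The function names, the positivity shift applied before the geometric mean, and the toy data shapes are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def mean_median_relevance(X):
        """Mean-median (MM) relevance: |mean - median| of each feature (column)."""
        return np.abs(X.mean(axis=0) - np.median(X, axis=0))

    def am_gm_relevance(X, eps=1e-12):
        """Arithmetic mean / geometric mean (AM-GM) relevance per feature.
        Features are shifted to be strictly positive so the geometric mean is defined
        (an assumption of this sketch, not necessarily the paper's preprocessing)."""
        Xp = X - X.min(axis=0) + eps
        am = Xp.mean(axis=0)
        gm = np.exp(np.log(Xp).mean(axis=0))
        return am / gm

    def select_top_k(X, relevance, k):
        """Keep the k features with the highest relevance scores."""
        idx = np.argsort(relevance)[::-1][:k]
        return X[:, idx], idx

    # Toy usage: 100 samples with 500-dimensional fused feature vectors (hypothetical sizes)
    X = np.random.randn(100, 500)
    X_reduced, kept = select_top_k(X, mean_median_relevance(X), k=50)

In the setting described by the abstract, a score of this kind would be computed on the individual or fused feature matrices, and the reduced representation would then be passed to the k-nearest neighbors, support vector machine, or dynamic time warping classifiers.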
Acknowledgements
--
Keywords
Multimodal, Silent speech interfaces, Supervised classification, Feature extraction
  • Computer and Information Sciences - Natural Sciences
  • Electrical Engineering, Electronic Engineering, Information Engineering - Engineering and Technology
Funding Records
Funding Reference: FCT-PEst-C/EEI/UI0127/2011
Funding Entity: Fundação para a Ciência e a Tecnologia
