Please use this identifier to cite or link to this item:
http://hdl.handle.net/10071/23256

| Author(s): | Vieira, D.; Freitas, J. D.; Acartürk, C.; Teixeira, A.; Sousa, F.; Candeias, S.; Dias, J. |
| Editor: | Yeliz Yesilada |
| Date: | 1-Jan-2015 |
| Title: | “Read That Article”: Exploring synergies between gaze and speech interaction |
| Pages: | 341 - 342 |
| ISBN: | 9781450334006 |
| DOI (Digital Object Identifier): | 10.1145/2700648.2811369 |
| Keywords: | Multimodal; Gaze; Speech; Fusion |
| Abstract: | Gaze information has the potential to benefit Human-Computer Interaction (HCI) tasks, particularly when combined with speech. Gaze can improve our understanding of the user intention, as a secondary input modality, or it can be used as the main input modality by users with some level of permanent or temporary impairments. In this paper we describe a multimodal HCI system prototype which supports speech, gaze and the combination of both. The system has been developed for Active Assisted Living scenarios. |
| Peer reviewed: | yes |
| Access type: | Open Access |
| Appears in Collections: | ISTAR-CRI - Comunicações a conferências internacionais |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| conferenceObject_42814.pdf | Accepted Version | 955.54 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.