Speech/gesture interface to a visual-computing environment

Rajeev Sharma, Michael Zeller, Vladimir I. Pavlovic, Thomas S. Huang, Zion Lo, Stephen Chu, Yunxin Zhao, James C. Phillips, Klaus J. Schulten

Research output: Contribution to journal › Article

Abstract

A bimodal speech/gesture interface is presented that lets researchers interact with 3D graphical objects in a virtual environment using spoken words and simple, free-hand gestures. In this initial implementation, new users needed only a few minutes to become acquainted with the setup. Every user reported that working with the interface was more convenient and, in most cases, more efficient than traditional interaction with Visual Molecular Dynamics (VMD).
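The abstract does not describe the fusion mechanism itself, but the core idea of a bimodal interface, pairing a recognized spoken command with a co-occurring hand gesture to form one interaction command, can be sketched as follows. All names here (`Gesture`, `fuse_command`, the command vocabulary) are hypothetical illustrations, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    """A simplified free-hand gesture: a label plus a 3D position.

    'kind' and the coordinate convention are illustrative assumptions.
    """
    kind: str                       # e.g. "point", "grab"
    position: tuple[float, ...]     # (x, y, z) in scene coordinates

def fuse_command(speech: str, gesture: Gesture) -> dict:
    """Combine a spoken word with a simultaneous gesture into one
    interaction command (a hypothetical late-fusion rule)."""
    if speech == "rotate" and gesture.kind == "grab":
        # The gesture supplies the anchor point for the rotation.
        return {"action": "rotate", "anchor": gesture.position}
    if speech == "select" and gesture.kind == "point":
        # The pointing position resolves the deictic reference.
        return {"action": "select", "target": gesture.position}
    # Unrecognized combinations are ignored.
    return {"action": "noop"}
```

For example, saying "select" while pointing at scene coordinates (1.0, 2.0, 0.5) yields `{"action": "select", "target": (1.0, 2.0, 0.5)}`. The design point is that neither modality alone is sufficient: speech names the action, the gesture grounds it in 3D space.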

Original language: English (US)
Pages (from-to): 29-37
Number of pages: 9
Journal: IEEE Computer Graphics and Applications
Volume: 20
Issue number: 2
DOI: 10.1109/38.824531
State: Published - March 1, 2000

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design


Cite this

Sharma, R., Zeller, M., Pavlovic, V. I., Huang, T. S., Lo, Z., Chu, S., Zhao, Y., Phillips, J. C., & Schulten, K. J. (2000). Speech/gesture interface to a visual-computing environment. IEEE Computer Graphics and Applications, 20(2), 29-37. https://doi.org/10.1109/38.824531