TY - GEN
T1 - Speech/gesture interface to a visual computing environment for molecular biologists
AU - Sharma, Rajeev
AU - Huang, Thomas S.
AU - Pavlovic, Vladimir I.
AU - Zhao, Yunxin
AU - Lo, Zion
AU - Chu, Stephen
AU - Schulten, Klaus
AU - Dalke, Andrew
AU - Phillips, Jim
AU - Zeller, Michael
AU - Humphrey, William
N1 - Copyright:
Copyright 2014 Elsevier B.V., All rights reserved.
PY - 1996
Y1 - 1996
N2 - Recent progress in 3-D, immersive display and virtual reality (VR) technologies has made possible many exciting applications, for example interactive visualization of complex scientific data. To fully exploit this potential there is a need for "natural" interfaces that allow the manipulation of such displays without cumbersome attachments. In this paper we describe the use of visual hand gesture analysis and speech recognition for developing a speech/gesture interface for controlling a 3-D display. The interface enhances an existing application, VMD, which is a VR visual computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We describe the visual gesture analysis and the speech analysis techniques used in developing this interface. The dual modality of speech/gesture is found to greatly aid the interaction capability.
AB - Recent progress in 3-D, immersive display and virtual reality (VR) technologies has made possible many exciting applications, for example interactive visualization of complex scientific data. To fully exploit this potential there is a need for "natural" interfaces that allow the manipulation of such displays without cumbersome attachments. In this paper we describe the use of visual hand gesture analysis and speech recognition for developing a speech/gesture interface for controlling a 3-D display. The interface enhances an existing application, VMD, which is a VR visual computing environment for molecular biologists. The free hand gestures are used for manipulating the 3-D graphical display together with a set of speech commands. We describe the visual gesture analysis and the speech analysis techniques used in developing this interface. The dual modality of speech/gesture is found to greatly aid the interaction capability.
UR - http://www.scopus.com/inward/record.url?scp=84898825920&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84898825920&partnerID=8YFLogxK
U2 - 10.1109/ICPR.1996.547311
DO - 10.1109/ICPR.1996.547311
M3 - Conference contribution
AN - SCOPUS:84898825920
SN - 081867282X
SN - 9780818672828
T3 - Proceedings - International Conference on Pattern Recognition
SP - 964
EP - 968
BT - Track C
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 13th International Conference on Pattern Recognition, ICPR 1996
Y2 - 25 August 1996 through 29 August 1996
ER -