Profiling large-vocabulary continuous speech recognition on embedded devices: A hardware resource sensitivity analysis

Kai Yu, Rob A. Rutenbar

Research output: Contribution to journal › Conference article › peer-review

Abstract

When deployed in embedded systems, speech recognizers are necessarily reduced from the large-vocabulary continuous speech recognizers (LVCSR) found on desktops or servers in order to fit within limited hardware. However, embedded hardware continues to evolve in capability; today's smartphones are vastly more powerful than their recent ancestors. This begets a new question: which hardware features not currently found on today's embedded platforms, but potential add-ons to tomorrow's devices, are most likely to improve recognition performance? Said differently: what is the sensitivity of the recognizer to fine-grain details of the embedded hardware resources? To answer this question rigorously and quantitatively, we offer results from a detailed study of LVCSR performance as a function of microarchitecture options on an embedded ARM11 and an enterprise-class Intel Core2Duo. We estimate speed and energy consumption, and show, feature by feature, how hardware resources impact recognizer performance.
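To make the notion of "sensitivity" concrete, the sketch below shows one common way such a quantity can be computed from profiled runs: the normalized change in a recognizer metric (e.g., decode time) per normalized change in a hardware resource (e.g., L2 cache size). This is purely illustrative and not taken from the paper; the function name, the choice of L2 cache size as the resource, and all numbers are placeholder assumptions.

```python
# Illustrative sketch only -- not from the paper. It computes a normalized
# sensitivity: the fractional change in a recognizer metric (here, decode time)
# divided by the fractional change in a hardware resource (here, L2 cache size),
# using two profiled runs. All values are made-up placeholders.

def resource_sensitivity(metric_base, metric_varied, resource_base, resource_varied):
    """Return d(metric)/d(resource), with both deltas normalized to baseline."""
    d_metric = (metric_varied - metric_base) / metric_base
    d_resource = (resource_varied - resource_base) / resource_base
    return d_metric / d_resource

# Hypothetical profiled data: decode time (CPU seconds per second of audio)
# measured at two simulated L2 cache sizes (KB).
baseline_time, varied_time = 3.2, 2.6   # placeholder measurements
baseline_l2, varied_l2 = 256, 512       # placeholder cache sizes

s = resource_sensitivity(baseline_time, varied_time, baseline_l2, varied_l2)
print(f"sensitivity of decode time to L2 size: {s:+.2f}")
# Here s is about -0.19: doubling L2 (a 100% increase) cut decode time ~19%,
# i.e., roughly a 0.19% reduction per 1% increase in cache size.
```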

Original language: English (US)
Pages (from-to): 1923-1926
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2009
Externally published: Yes
Event: 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009 - Brighton, United Kingdom
Duration: Sep 6, 2009 - Sep 10, 2009

Keywords

  • Hardware profiling
  • Software performance
  • Speech recognition

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Sensory Systems
