Rhythm measures with language-independent segmentation

Anastassia Loukina, Greg Kochanski, Chilin Shih, Elinor Keane, Ian Watson

Research output: Contribution to journal › Conference article › peer-review

Abstract

We compare 15 measures of speech rhythm based on an automatic segmentation of speech into vowel-like and consonant-like regions. This allows us to apply identical segmentation criteria to all languages and to compute rhythm measures over a large corpus. It may also approximate more closely the segmentation available to pre-lexical infants, who apparently can discriminate between languages. We find that within-language variation is large and comparable to the between-language differences we observed. We evaluate the success of different measures in separating languages and show that the efficiency of measures depends on the languages included in the corpus. Rhythm appears to be described by two dimensions, and different published rhythm measures capture different aspects of it.
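Two of the rhythm measures commonly compared in this literature, %V (the proportion of utterance duration that is vocalic) and the normalized Pairwise Variability Index (nPVI), can be sketched from a sequence of labeled segment durations. The following is an illustrative example only, not the authors' implementation; the segment labels and durations are invented for demonstration.

```python
# Illustrative sketch (not the paper's code): two classic rhythm measures
# computed from (label, duration) pairs, where labels mark vowel-like ("V")
# and consonant-like ("C") regions of the signal.

def percent_v(segments):
    """%V: percentage of total duration occupied by vowel-like regions."""
    total = sum(d for _, d in segments)
    vocalic = sum(d for lab, d in segments if lab == "V")
    return 100.0 * vocalic / total

def npvi(durations):
    """nPVI: mean normalized difference between successive interval durations."""
    pairs = list(zip(durations, durations[1:]))
    diffs = [abs(a - b) / ((a + b) / 2.0) for a, b in pairs]
    return 100.0 * sum(diffs) / len(diffs)

# Hypothetical utterance: alternating consonant-like and vowel-like regions.
segments = [("C", 0.08), ("V", 0.12), ("C", 0.10),
            ("V", 0.20), ("C", 0.06), ("V", 0.15)]
v_durations = [d for lab, d in segments if lab == "V"]

print(round(percent_v(segments), 1))   # → 66.2
print(round(npvi(v_durations), 1))     # → 39.3
```

Measures of this family differ mainly in which intervals they pool (vocalic vs. consonantal) and whether they normalize for speech rate, which is one reason different measures can capture different dimensions of rhythm.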

Original language: English (US)
Pages (from-to): 1531-1534
Number of pages: 4
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2009
Event: 10th Annual Conference of the International Speech Communication Association, INTERSPEECH 2009 - Brighton, United Kingdom
Duration: Sep 6, 2009 – Sep 10, 2009

Keywords

  • Linear discriminant
  • Typology
  • Acoustic phonetics
  • Speech segmentation
  • Experimental

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Sensory Systems
