Rhythm measures with language-independent segmentation

Anastassia Loukina, Greg Kochanski, Chilin Shih, Elinor Keane, Ian Watson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We compare 15 measures of speech rhythm based on an automatic segmentation of speech into vowel-like and consonant-like regions. This allows us to apply identical segmentation criteria to all languages and to compute rhythm measures over a large corpus. It may also approximate more closely the segmentation available to pre-lexical infants, who can apparently discriminate between languages. We find that within-language variation is large and comparable to the between-language differences we observed. We evaluate how well different measures separate languages and show that the effectiveness of a measure depends on the languages included in the corpus. Rhythm appears to be described by two dimensions, and different published rhythm measures capture different aspects of it.
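As a rough illustration of the kind of interval-based computation the abstract describes, the sketch below derives a few rhythm measures that are standard in this literature (%V, ΔV, ΔC and the vowel nPVI) from a list of (label, duration) pairs for vowel-like and consonant-like regions. The function name, the segment labels and the example durations are hypothetical, and these four measures are an assumption for illustration; they are not necessarily the 15 measures evaluated in the paper, nor the authors' implementation.

    import statistics

    def rhythm_measures(intervals):
        """Compute a few widely used interval-based rhythm measures from a
        sequence of (label, duration) pairs, where label is 'V' for a
        vowel-like region and 'C' for a consonant-like region and
        duration is in seconds."""
        v = [d for lab, d in intervals if lab == 'V']
        c = [d for lab, d in intervals if lab == 'C']
        total = sum(v) + sum(c)

        # Normalised Pairwise Variability Index over the vowel-like intervals
        npvi_v = 100.0 * statistics.mean(
            abs(a - b) / ((a + b) / 2.0) for a, b in zip(v, v[1:])
        )

        return {
            'percent_V': 100.0 * sum(v) / total,  # share of time in vowel-like regions
            'delta_V': statistics.stdev(v),       # st. dev. of vowel-like durations
            'delta_C': statistics.stdev(c),       # st. dev. of consonant-like durations
            'nPVI_V': npvi_v,
        }

    # Hypothetical segmentation of a short utterance into C/V-like regions.
    segments = [('C', 0.08), ('V', 0.12), ('C', 0.10), ('V', 0.20),
                ('C', 0.06), ('V', 0.09), ('C', 0.11), ('V', 0.15)]
    print(rhythm_measures(segments))

Measures of this family differ mainly in whether they normalise for speech rate and whether they are computed over vocalic or consonantal intervals, which is one way different measures can end up capturing different dimensions of rhythm.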

Original language: English (US)
Title of host publication: Proceedings of Interspeech 2009
Subtitle of host publication: Speech and Intelligence
Pages: 1531-1534
Number of pages: 4
State: Published - 2009

Keywords

  • Linear discriminant
  • Typology
  • Acoustic phonetics
  • Speech segmentation
  • Experimental

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Sensory Systems
