Modeling dynamic textures using subspace mixtures

Che Bin Liu, Ruei Sung Lin, Narendra Ahuja

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we aim at modeling video sequences that exhibit temporal appearance variation. The dynamic texture model proposed in [6] is effective for modeling simple dynamic scenes. However, because of its over-simplified appearance model and under-constrained dynamics model, the visual quality of its synthesized video sequences is often unsatisfactory. This motivates our new model. We parameterize the nonlinear image manifold using a mixture of probabilistic principal component analyzers. We then align the coefficients from the different mixture components in a global coordinate system, and model the image dynamics in this global coordinate system using an autoregressive process. Experimental results show that our method captures complex temporal appearance variation and offers improved synthesis results over previous work.
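The abstract describes a two-stage pipeline: a low-dimensional appearance model of the frames, followed by an autoregressive model of the coefficient trajectory in a global coordinate system. The sketch below is a minimal illustration of that structure, not the authors' implementation: for brevity a single linear PCA stands in for the globally coordinated mixture of probabilistic principal component analyzers, and a first-order AR process is fit by least squares. All function names and parameters are illustrative assumptions.

```python
# Minimal sketch of the pipeline structure described in the abstract
# (assumption: a single PCA replaces the paper's globally coordinated
# mixture of PPCA models; the AR order and dimensions are illustrative).
import numpy as np

def fit_linear_dynamic_texture(frames, n_components=10):
    """frames: (T, H*W) array of vectorized grayscale frames."""
    mean = frames.mean(axis=0)
    X = frames - mean
    # Appearance model: PCA via SVD (stand-in for the PPCA mixture + alignment).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]                 # (n_components, H*W)
    coeffs = X @ basis.T                      # (T, n_components) "global" coordinates
    # Dynamics model: first-order AR process x_{t+1} = A x_t + noise,
    # estimated by least squares on consecutive coefficient pairs.
    X_t, X_next = coeffs[:-1], coeffs[1:]
    A_lsq, *_ = np.linalg.lstsq(X_t, X_next, rcond=None)
    A = A_lsq.T                               # so that x_{t+1} = A @ x_t
    residual = X_next - X_t @ A_lsq
    noise_cov = np.cov(residual.T)
    return mean, basis, coeffs, A, noise_cov

def synthesize(mean, basis, x0, A, noise_cov, n_frames=100, rng=None):
    """Roll the AR model forward and decode coefficients back to frames."""
    rng = np.random.default_rng() if rng is None else rng
    x, out = x0.copy(), []
    for _ in range(n_frames):
        x = A @ x + rng.multivariate_normal(np.zeros(len(x)), noise_cov)
        out.append(mean + basis.T @ x)
    return np.stack(out)                      # (n_frames, H*W)
```

Calling fit_linear_dynamic_texture on a (T, H*W) array of vectorized frames and then synthesize with x0 = coeffs[-1] rolls the learned dynamics forward; in the paper, the single SVD step would be replaced by per-component PPCA fits plus the alignment stage that produces the shared global coordinates.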

Original language: English (US)
Title of host publication: IEEE International Conference on Multimedia and Expo, ICME 2005
Pages: 1378-1381
Number of pages: 4
DOIs
State: Published - Dec 1 2005
Event: IEEE International Conference on Multimedia and Expo, ICME 2005 - Amsterdam, Netherlands
Duration: Jul 6 2005 - Jul 8 2005

Publication series

Name: IEEE International Conference on Multimedia and Expo, ICME 2005
Volume: 2005

Other

Other: IEEE International Conference on Multimedia and Expo, ICME 2005
Country: Netherlands
City: Amsterdam
Period: 7/6/05 - 7/8/05

ASJC Scopus subject areas

  • Engineering(all)

