Roto texture: Automated tools for texturing raw video

Hui Fang, John C Hart

Research output: Contribution to journal › Article

Abstract

We propose a video editing system that allows a user to apply a time-coherent texture to a surface depicted in the raw video from a single uncalibrated camera, including the surface texture mapping of a texture image and the surface texture synthesis from a texture swatch. Our system avoids the construction of a 3D shape model and instead uses the recovered normal field to deform the texture so that it plausibly adheres to the undulations of the depicted surface. The texture mapping method uses the nonlinear least-squares optimization of a spring model to control the behavior of the texture image as it is deformed to match the evolving normal field through the video. The texture synthesis method uses a coarse optical flow to advect clusters of pixels corresponding to patches of similarly oriented surface points. These clusters are organized into a minimum advection tree to account for the dynamic visibility of clusters. We take a rather crude approach to normal recovery and optical flow estimation, yet the results are robust and plausible for nearly diffuse surfaces such as faces and t-shirts.
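To illustrate the spring-model idea mentioned in the abstract, here is a minimal, hypothetical sketch: texture sample points are connected by springs whose rest lengths shrink where the recovered surface normal tilts away from the viewer (foreshortening), and the positions are relaxed toward minimal spring energy. All function names, the 1D chain simplification, and the parameters are illustrative assumptions, not the paper's actual implementation (which uses nonlinear least-squares on a full 2D spring mesh).

```python
def foreshorten(normal, view=(0.0, 0.0, 1.0)):
    """Rest-length scale factor: 1.0 for a surface patch facing the
    camera, smaller as the normal tilts away (|cos| of the view angle).
    Hypothetical helper, not from the paper."""
    dot = sum(n * v for n, v in zip(normal, view))
    return max(abs(dot), 0.1)  # clamp so springs never collapse

def relax_chain(xs, normals, spacing=1.0, iters=200, step=0.5):
    """Relax a 1D chain of texture sample positions so each spring
    approaches its foreshortened rest length (Jacobi-style iteration
    standing in for the paper's least-squares solve)."""
    xs = list(xs)
    # rest length of spring i averages the foreshortening at its ends
    rest = [spacing * 0.5 * (foreshorten(normals[i]) + foreshorten(normals[i + 1]))
            for i in range(len(xs) - 1)]
    for _ in range(iters):
        forces = [0.0] * len(xs)
        for i, r in enumerate(rest):
            f = (xs[i + 1] - xs[i]) - r  # positive when stretched
            forces[i] += step * f * 0.5      # pull endpoints together
            forces[i + 1] -= step * f * 0.5  # (or push apart if compressed)
        xs = [x + f for x, f in zip(xs, forces)]
    return xs
```

With frontal normals the chain keeps its uniform spacing; with normals tilted 60° from the view direction the spacing relaxes toward half its rest value, which is the foreshortening behavior the abstract describes.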

Original language: English (US)
Article number: 1703377
Pages (from-to): 1580-1589
Number of pages: 10
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 12
Issue number: 6
DOIs: 10.1109/TVCG.2006.102
State: Published - Nov 1 2006

Keywords

  • Shape from shading
  • Texture synthesis
  • Video editing

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design

Cite this

Roto texture: Automated tools for texturing raw video. / Fang, Hui; Hart, John C.

In: IEEE Transactions on Visualization and Computer Graphics, Vol. 12, No. 6, 1703377, 01.11.2006, p. 1580-1589.

Research output: Contribution to journal › Article

@article{ac44ddf671f7453baf1bca372a862db3,
title = "Roto texture: Automated tools for texturing raw video",
abstract = "We propose a video editing system that allows a user to apply a time-coherent texture to a surface depicted in the raw video from a single uncalibrated camera, including the surface texture mapping of a texture image and the surface texture synthesis from a texture swatch. Our system avoids the construction of a 3D shape model and instead uses the recovered normal field to deform the texture so that it plausibly adheres to the undulations of the depicted surface. The texture mapping method uses the nonlinear least-squares optimization of a spring model to control the behavior of the texture image as it is deformed to match the evolving normal field through the video. The texture synthesis method uses a coarse optical flow to advect clusters of pixels corresponding to patches of similarly oriented surface points. These clusters are organized into a minimum advection tree to account for the dynamic visibility of clusters. We take a rather crude approach to normal recovery and optical flow estimation, yet the results are robust and plausible for nearly diffuse surfaces such as faces and t-shirts.",
keywords = "Shape from shading, Texture synthesis, Video editing",
author = "Fang, Hui and Hart, {John C.}",
year = "2006",
month = "11",
day = "1",
doi = "10.1109/TVCG.2006.102",
language = "English (US)",
volume = "12",
pages = "1580--1589",
journal = "IEEE Transactions on Visualization and Computer Graphics",
issn = "1077-2626",
publisher = "IEEE Computer Society",
number = "6",

}

TY - JOUR

T1 - Roto texture

T2 - Automated tools for texturing raw video

AU - Fang, Hui

AU - Hart, John C

PY - 2006/11/1

Y1 - 2006/11/1

N2 - We propose a video editing system that allows a user to apply a time-coherent texture to a surface depicted in the raw video from a single uncalibrated camera, including the surface texture mapping of a texture image and the surface texture synthesis from a texture swatch. Our system avoids the construction of a 3D shape model and instead uses the recovered normal field to deform the texture so that it plausibly adheres to the undulations of the depicted surface. The texture mapping method uses the nonlinear least-squares optimization of a spring model to control the behavior of the texture image as it is deformed to match the evolving normal field through the video. The texture synthesis method uses a coarse optical flow to advect clusters of pixels corresponding to patches of similarly oriented surface points. These clusters are organized into a minimum advection tree to account for the dynamic visibility of clusters. We take a rather crude approach to normal recovery and optical flow estimation, yet the results are robust and plausible for nearly diffuse surfaces such as faces and t-shirts.

AB - We propose a video editing system that allows a user to apply a time-coherent texture to a surface depicted in the raw video from a single uncalibrated camera, including the surface texture mapping of a texture image and the surface texture synthesis from a texture swatch. Our system avoids the construction of a 3D shape model and instead uses the recovered normal field to deform the texture so that it plausibly adheres to the undulations of the depicted surface. The texture mapping method uses the nonlinear least-squares optimization of a spring model to control the behavior of the texture image as it is deformed to match the evolving normal field through the video. The texture synthesis method uses a coarse optical flow to advect clusters of pixels corresponding to patches of similarly oriented surface points. These clusters are organized into a minimum advection tree to account for the dynamic visibility of clusters. We take a rather crude approach to normal recovery and optical flow estimation, yet the results are robust and plausible for nearly diffuse surfaces such as faces and t-shirts.

KW - Shape from shading

KW - Texture synthesis

KW - Video editing

UR - http://www.scopus.com/inward/record.url?scp=33749537228&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33749537228&partnerID=8YFLogxK

U2 - 10.1109/TVCG.2006.102

DO - 10.1109/TVCG.2006.102

M3 - Article

C2 - 17073379

AN - SCOPUS:33749537228

VL - 12

SP - 1580

EP - 1589

JO - IEEE Transactions on Visualization and Computer Graphics

JF - IEEE Transactions on Visualization and Computer Graphics

SN - 1077-2626

IS - 6

M1 - 1703377

ER -