We present an automatic video completion algorithm that synthesizes missing regions in videos in a temporally coherent fashion. Our algorithm can handle dynamic scenes captured with a moving camera. State-of-the-art approaches have difficulty handling such videos because viewpoint changes cause image-space motion vectors in the missing and known regions to be inconsistent. We address this problem by jointly estimating optical flow and color in the missing regions. Using pixel-wise forward/backward flow fields enables us to synthesize temporally coherent colors. We formulate the problem as a non-parametric patch-based optimization. We demonstrate our technique on numerous challenging videos.
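To give a flavor of flow-guided completion, the sketch below fills missing pixels by propagating known colors along forward and backward flow between neighboring frames. This is a deliberately simplified toy (integer flow, grayscale frames, brute-force backward lookup), not the paper's joint patch-based optimization; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def complete_video(frames, mask, flow, n_iters=10):
    """Toy flow-guided hole filling (illustrative only, not the paper's method).

    frames: (T, H, W) float video; values under the mask are ignored
    mask:   (T, H, W) bool, True where a pixel is missing
    flow:   (T-1, H, W, 2) integer (dy, dx) flow from frame t to frame t+1
    """
    out = frames.copy()
    known = ~mask
    T, H, W = frames.shape
    for _ in range(n_iters):
        for t in range(T):
            for y in range(H):
                for x in range(W):
                    if known[t, y, x]:
                        continue
                    samples = []
                    # Forward: (y, x) in frame t maps to (y+dy, x+dx) in t+1.
                    if t + 1 < T:
                        dy, dx = flow[t, y, x]
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and known[t + 1, ny, nx]:
                            samples.append(out[t + 1, ny, nx])
                    # Backward: find pixels in t-1 that flow onto (y, x).
                    if t - 1 >= 0:
                        for py in range(H):
                            for px in range(W):
                                dy, dx = flow[t - 1, py, px]
                                if (py + dy == y and px + dx == x
                                        and known[t - 1, py, px]):
                                    samples.append(out[t - 1, py, px])
                    if samples:
                        # Average colors propagated from temporal neighbors.
                        out[t, y, x] = float(np.mean(samples))
                        known[t, y, x] = True
    return out
```

The actual method alternates between estimating the flow fields themselves inside the hole and synthesizing colors consistent with them; here the flow is assumed given, which is the key simplification.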
- Original language: English (US)
- Journal: ACM Transactions on Graphics
- State: Published - Nov 2016
Keywords

- Patch-based synthesis
- Video completion
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design