SceneGen: Learning To Generate Realistic Traffic Scenes

Shuhan Tan, Kelvin Wong, Shenlong Wang, Sivabalan Manivasagam, Mengye Ren, Raquel Urtasun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider the problem of automatically generating realistic traffic scenes. Existing methods typically insert actors into the scene according to a set of hand-crafted heuristics and are limited in their ability to model the true complexity and diversity of real traffic scenes, thus inducing a content gap between synthesized and real traffic scenes. As a result, existing simulators lack the fidelity necessary to train and test self-driving vehicles. To address this limitation, we present SceneGen, a neural autoregressive model of traffic scenes that eschews the need for rules and heuristics. In particular, given the ego-vehicle state and a high definition map of the surrounding area, SceneGen inserts actors of various classes into the scene and synthesizes their sizes, orientations, and velocities. On two large-scale datasets, we demonstrate SceneGen's ability to faithfully model the distribution of real traffic scenes. Moreover, we show that SceneGen, coupled with sensor simulation, can be used to train perception models that generalize to the real world.
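The abstract describes an autoregressive factorization: actors are inserted into the scene one at a time, each conditioned on the ego-vehicle state, the HD map, and the actors placed so far. The following minimal Python sketch illustrates that sampling loop under stated assumptions; every name here (Actor, model.encode, model.sample_actor, the attribute set) is a hypothetical placeholder for illustration, not the authors' actual implementation or API.

```python
# Hypothetical sketch of an autoregressive traffic-scene sampler in the spirit
# of SceneGen. All names are illustrative assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class Actor:
    cls: str                       # e.g. "vehicle", "pedestrian", "bicyclist"
    position: tuple[float, float]  # (x, y) in the ego frame
    size: tuple[float, float]      # (length, width) of the bounding box
    heading: float                 # orientation in radians
    velocity: tuple[float, float]  # (vx, vy)

def generate_scene(model, ego_state, hd_map, max_actors=64):
    """Insert actors one at a time, each conditioned on the ego-vehicle
    state, the HD map, and all previously generated actors."""
    actors: list[Actor] = []
    for _ in range(max_actors):
        # Re-encode the context so each new actor sees the scene so far.
        context = model.encode(ego_state, hd_map, actors)
        # The model factorizes each actor's attributes autoregressively:
        # class first, then position, size, heading, and velocity.
        actor = model.sample_actor(context)
        if actor is None:  # a "stop" decision: the scene is complete
            break
        actors.append(actor)
    return actors
```

Because each sample conditions on everything generated before it, the loop can capture interactions (e.g. queued vehicles in a lane) that independent, heuristic placement would miss; this is the property the abstract contrasts against hand-crafted insertion rules.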
Original language: English (US)
Title of host publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Pages: 892-901
Number of pages: 10
State: Published - Jun 1 2021
Externally published: Yes
