Generative Modeling for Multi-task Visual Learning

Zhipeng Bao, Martial Hebert, Yu-Xiong Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Generative modeling has recently shown great promise in computer vision, but it has mostly focused on synthesizing visually realistic images. In this paper, motivated by multi-task learning of shareable feature representations, we consider a novel problem of learning a shared generative model that is useful across various visual perception tasks. Correspondingly, we propose a general multi-task oriented generative modeling (MGM) framework, by coupling a discriminative multi-task network with a generative network. While it is challenging to synthesize both RGB images and pixel-level annotations in multi-task scenarios, our framework enables us to use synthesized images paired with only weak annotations (i.e., image-level scene labels) to facilitate multiple visual tasks. Experimental evaluation on challenging multi-task benchmarks, including NYUv2 and Taskonomy, demonstrates that our MGM framework improves the performance of all the tasks by large margins, consistently outperforming state-of-the-art multi-task approaches in different sample-size regimes.
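The abstract describes coupling a discriminative multi-task network with a generative network, where synthesized images carry only image-level scene labels. Below is a minimal sketch of that coupling idea, assuming a PyTorch-style setup; the module sizes, loss choices, and the generator interface (`generator(z, scene_labels)`, `latent_dim`) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a shared encoder with per-task heads, plus a
# weakly supervised step on synthesized images using image-level scene labels.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with per-task decoders (e.g., segmentation, depth)
    and an image-level scene classification head."""
    def __init__(self, num_classes=13, num_scenes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # pixel-level task
        self.depth_head = nn.Conv2d(64, 1, 1)           # pixel-level task
        self.scene_head = nn.Linear(64, num_scenes)     # image-level (weak) task

    def forward(self, x):
        feat = self.encoder(x)
        pooled = feat.mean(dim=(2, 3))                  # global average pooling
        return self.seg_head(feat), self.depth_head(feat), self.scene_head(pooled)

def weakly_supervised_step(generator, net, scene_labels, optimizer):
    """Train the multi-task network on synthesized images using only
    image-level scene labels (no pixel-level annotations).

    `generator` is a hypothetical conditional image generator exposing
    `latent_dim` and accepting (noise, scene_labels); it stands in for the
    generative network the abstract refers to.
    """
    z = torch.randn(scene_labels.size(0), generator.latent_dim)
    fake_images = generator(z, scene_labels)            # assumed interface
    _, _, scene_logits = net(fake_images)
    loss = nn.functional.cross_entropy(scene_logits, scene_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, real images with full annotations would train all heads, while synthesized images contribute only through the scene classification loss, which is one plausible way to read the weak-annotation coupling described above.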
Original language: English (US)
Title of host publication: Proceedings of the 39th International Conference on Machine Learning
Editors: Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, Sivan Sabato
Publisher: PMLR
Pages: 1537-1554
Number of pages: 18
Volume: 162
State: Published - May 1 2022

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
