Abstract

A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model. To address this issue, in this paper we develop knowledge flow, which moves 'knowledge' from multiple deep nets, referred to as teachers, to a new deep net model, called the student. The structures of the teachers and the student can differ arbitrarily, and they can be trained on entirely different tasks with different output spaces. Upon training with knowledge flow, the student is independent of the teachers. We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other 'knowledge exchange' methods.
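The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of one plausible reading of it, not the authors' exact formulation: the class name, the per-teacher linear projections, and the penalty weight are illustrative assumptions. The idea sketched here is that a student layer mixes its own features with projected features from frozen teachers (the projections let teacher and student architectures differ), while a penalty pushes the mixing weight onto the student so that, once trained, the student runs without its teachers.

```python
# Hypothetical sketch of the 'knowledge flow' idea from the abstract.
# Names and details (KnowledgeFlowLayer, linear projections, the 0.1
# penalty weight) are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeFlowLayer(nn.Module):
    """Mixes a student layer's features with projected teacher features."""

    def __init__(self, student_dim, teacher_dims):
        super().__init__()
        # One learned projection per teacher maps its (arbitrary-sized)
        # features into the student's feature space.
        self.proj = nn.ModuleList(
            [nn.Linear(d, student_dim) for d in teacher_dims]
        )
        # One mixing logit per source; index 0 is the student itself.
        self.mix_logits = nn.Parameter(torch.zeros(len(teacher_dims) + 1))

    def forward(self, student_feat, teacher_feats):
        w = F.softmax(self.mix_logits, dim=0)
        out = w[0] * student_feat
        for i, (p, t) in enumerate(zip(self.proj, teacher_feats)):
            # Teachers are frozen: detach so gradients only reach the
            # projections and the mixing weights.
            out = out + w[i + 1] * p(t.detach())
        return out

    def dependency_penalty(self):
        # Drives the student's own weight w[0] toward 1, so the trained
        # student no longer depends on its teachers at test time.
        return -torch.log(F.softmax(self.mix_logits, dim=0)[0])

# Toy usage: two frozen teachers with different feature sizes.
layer = KnowledgeFlowLayer(student_dim=64, teacher_dims=[128, 256])
s = torch.randn(8, 64)                          # student features, batch of 8
t = [torch.randn(8, 128), torch.randn(8, 256)]  # teacher features
mixed = layer(s, t)                             # shape: (8, 64)
loss = mixed.pow(2).mean() + 0.1 * layer.dependency_penalty()
loss.backward()
```

In this reading, annealing or weighting the penalty over training would gradually wean the student off the teachers, matching the abstract's claim that the student ends up independent of them.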

Original language: English (US)
State: Published - Jan 1, 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: May 6, 2019 - May 9, 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country: United States
City: New Orleans
Period: 5/6/19 - 5/9/19

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Liu, I. J., Peng, J., & Schwing, A. G. (2019). Knowledge flow: Improve upon your teachers. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.

Research output: Contribution to conference › Paper

Scopus record: http://www.scopus.com/inward/record.url?scp=85071166019&partnerID=8YFLogxK
Scopus AN: SCOPUS:85071166019