Learning Type-Aware Embeddings for Fashion Compatibility

Mariya I. Vasileva, Bryan A. Plummer, Krishna Dusad, Shreya Rajpal, Ranjitha Kumar, David Alexander Forsyth

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Outfits in online fashion data are composed of items of many different types (e.g. top, bottom, shoes) that share some stylistic relationship with one another. A representation for building outfits requires a method that can learn both notions of similarity (for example, when two tops are interchangeable) and compatibility (items of possibly different type that can go together in an outfit). This paper presents an approach to learning an image embedding that respects item type, and jointly learns notions of item similarity and compatibility in an end-to-end model. To evaluate the learned representation, we crawled 68,306 outfits created by users on the Polyvore website. Our approach obtains 3–5% improvement over the state-of-the-art on outfit compatibility prediction and fill-in-the-blank tasks using our dataset, as well as an established smaller dataset, while supporting a variety of useful queries (Code and data: https://github.com/mvasil/fashion-compatibility).
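As a rough illustration of the type-aware idea described in the abstract (a sketch, not the authors' exact architecture), compatibility can be scored with a triplet loss computed in a type-pair subspace selected by a learned mask over the general embedding; the embedding dimension, mask, and margin below are hypothetical toy values:

```python
import numpy as np

def triplet_compatibility_loss(anchor, positive, negative, mask, margin=0.2):
    """Triplet margin loss computed in a type-pair subspace.

    The general embeddings are projected into a subspace by an
    elementwise mask (in the paper, learned per pair of item types),
    so that compatibility between, say, tops and shoes is measured
    only on the dimensions relevant to that type pair.
    """
    a, p, n = anchor * mask, positive * mask, negative * mask
    d_pos = np.sum((a - p) ** 2)  # masked distance to a compatible item
    d_neg = np.sum((a - n) ** 2)  # masked distance to an incompatible item
    return max(0.0, d_pos - d_neg + margin)

# Toy example: 8-d embeddings; this (hypothetical) type pair's mask
# keeps only the first four dimensions.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.01 * rng.normal(size=8)  # compatible item, nearby
negative = rng.normal(size=8)                  # incompatible item
mask = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
loss = triplet_compatibility_loss(anchor, positive, negative, mask)
```

Because different type pairs use different masks, two items can be close in one subspace (e.g. as mutually compatible with a given top) while remaining distinguishable in the general embedding, which is how the model keeps similarity and compatibility as separate notions.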

Original language: English (US)
Title of host publication: Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings
Editors: Yair Weiss, Vittorio Ferrari, Cristian Sminchisescu, Martial Hebert
Publisher: Springer-Verlag
Pages: 405-421
Number of pages: 17
ISBN (Print): 9783030012694
DOIs: https://doi.org/10.1007/978-3-030-01270-0_24
State: Published - Jan 1 2018
Event: 15th European Conference on Computer Vision, ECCV 2018 - Munich, Germany
Duration: Sep 8 2018 – Sep 14 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11220 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 15th European Conference on Computer Vision, ECCV 2018
Country: Germany
City: Munich
Period: 9/8/18 – 9/14/18

Keywords

  • Appearance representations
  • Embedding methods
  • Fashion

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)

Cite this

Vasileva, M. I., Plummer, B. A., Dusad, K., Rajpal, S., Kumar, R., & Forsyth, D. A. (2018). Learning Type-Aware Embeddings for Fashion Compatibility. In Y. Weiss, V. Ferrari, C. Sminchisescu, & M. Hebert (Eds.), Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings (pp. 405-421). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11220 LNCS). Springer-Verlag. https://doi.org/10.1007/978-3-030-01270-0_24

@inproceedings{2735565773d14b8fb8124a8594ab7297,
title = "Learning Type-Aware Embeddings for Fashion Compatibility",
abstract = "Outfits in online fashion data are composed of items of many different types (e.g. top, bottom, shoes) that share some stylistic relationship with one another. A representation for building outfits requires a method that can learn both notions of similarity (for example, when two tops are interchangeable) and compatibility (items of possibly different type that can go together in an outfit). This paper presents an approach to learning an image embedding that respects item type, and jointly learns notions of item similarity and compatibility in an end-to-end model. To evaluate the learned representation, we crawled 68,306 outfits created by users on the Polyvore website. Our approach obtains 3–5\% improvement over the state-of-the-art on outfit compatibility prediction and fill-in-the-blank tasks using our dataset, as well as an established smaller dataset, while supporting a variety of useful queries (Code and data: https://github.com/mvasil/fashion-compatibility).",
keywords = "Appearance representations, Embedding methods, Fashion",
author = "Vasileva, {Mariya I.} and Plummer, {Bryan A.} and Krishna Dusad and Shreya Rajpal and Ranjitha Kumar and Forsyth, {David Alexander}",
year = "2018",
month = "1",
day = "1",
doi = "10.1007/978-3-030-01270-0_24",
language = "English (US)",
isbn = "9783030012694",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer-Verlag",
pages = "405--421",
editor = "Yair Weiss and Vittorio Ferrari and Cristian Sminchisescu and Martial Hebert",
booktitle = "Computer Vision – ECCV 2018 - 15th European Conference, 2018, Proceedings",

}