Real-world font recognition using deep network and domain adaptation

Zhangyang Wang, Thomas S. Huang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jonathan Brandt

Research output: Contribution to conference › Paper

Abstract

We address a challenging fine-grained classification problem: recognizing the font style from an image of text. In this task it is easy to generate large numbers of rendered font examples, but very hard to obtain labeled real-world images. This synthetic-to-real domain gap caused poor generalization to new real data in previous methods (Chen et al. (2014)). In this paper, we adopt Convolutional Neural Networks and introduce an adaptation technique based on a Stacked Convolutional Auto-Encoder that exploits unlabeled real-world images together with synthetic data. The proposed method achieves a top-5 accuracy above 80% on a real-world dataset.
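The adaptation idea sketched in the abstract (pretrain a feature extractor as an auto-encoder on unlabeled real-world data, then reuse those features for a classifier trained on labeled synthetic data) can be illustrated with a toy tied-weight linear auto-encoder. This is a hedged, assumption-laden stand-in for one layer of the paper's Stacked Convolutional Auto-Encoder; the data, sizes, and hyperparameters below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two domains (illustrative only): "real" data are
# modeled as a scaled, shifted, noisier version of the synthetic domain.
synthetic = rng.normal(0.0, 1.0, size=(200, 16))
real_unlabeled = 0.8 * rng.normal(0.0, 1.0, size=(100, 16)) \
    + rng.normal(0.5, 0.3, size=(100, 16))

def pretrain_autoencoder(X, hidden=8, lr=0.05, epochs=300):
    """One tied-weight linear auto-encoder layer, trained by gradient
    descent on reconstruction error. A simplified stand-in for one layer
    of a Stacked Convolutional Auto-Encoder: it learns features from
    unlabeled data, no labels required."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, size=(d, hidden))
    losses = []
    for _ in range(epochs):
        H = X @ W            # encode
        R = H @ W.T          # decode (tied weights)
        E = R - X
        losses.append(float(np.mean(E ** 2)))
        # Gradient of the squared reconstruction error w.r.t. W
        # (up to a constant factor absorbed into the learning rate).
        grad = (X.T @ E @ W + E.T @ X @ W) / n
        W -= lr * grad
    return W, losses

# Step 1: unsupervised pretraining on the unlabeled real-world data.
W, losses = pretrain_autoencoder(real_unlabeled)
print(f"reconstruction MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")

# Step 2: reuse the adapted encoder to featurize the labeled synthetic
# set; a font classifier would then be trained on these features.
synthetic_features = synthetic @ W
```

The key point the sketch conveys is the split of supervision across domains: the representation is fit to the real-world distribution without labels, while the label-hungry classification stage uses only cheaply rendered synthetic examples.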

Original language: English (US)
State: Published - Jan 1 2015
Event: 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States
Duration: May 7 2015 – May 9 2015

Conference

Conference: 3rd International Conference on Learning Representations, ICLR 2015
Country: United States
City: San Diego
Period: 5/7/15 – 5/9/15

ASJC Scopus subject areas

  • Education
  • Linguistics and Language
  • Language and Linguistics
  • Computer Science Applications


Cite this

    Wang, Z., Huang, T. S., Yang, J., Jin, H., Shechtman, E., Agarwala, A., & Brandt, J. (2015). Real-world font recognition using deep network and domain adaptation. Paper presented at 3rd International Conference on Learning Representations, ICLR 2015, San Diego, United States.