Enhancing multi-lingual information extraction via cross-media inference and fusion

Adam Lee, Marissa Passantino, Heng Ji, Guojun Qi, Thomas Huang

Research output: Contribution to conference › Paper

Abstract

We describe a new information fusion approach that integrates facts extracted from cross-media objects (videos and texts) into a coherent common representation spanning multiple levels of knowledge (concepts, relations, and events). Beyond standard information fusion, we exploit video extraction results to significantly improve text Information Extraction. We further extend our methods to a multi-lingual setting (English, Arabic, and Chinese) by presenting a case study on cross-lingual comparable corpora acquisition based on video comparison.

Original language: English (US)
Pages: 630-638
Number of pages: 9
State: Published - Dec 1 2010
Event: 23rd International Conference on Computational Linguistics, Coling 2010 - Beijing, China
Duration: Aug 23 2010 - Aug 27 2010

Other

Other: 23rd International Conference on Computational Linguistics, Coling 2010
Country: China
City: Beijing
Period: 8/23/10 - 8/27/10

ASJC Scopus subject areas

  • Language and Linguistics
  • Computational Theory and Mathematics
  • Linguistics and Language


Cite this

Lee, A., Passantino, M., Ji, H., Qi, G., & Huang, T. (2010). Enhancing multi-lingual information extraction via cross-media inference and fusion. 630-638. Paper presented at 23rd International Conference on Computational Linguistics, Coling 2010, Beijing, China.