Two experiments for embedding WordNet hierarchy into vector spaces

Jean-Philippe Bernardy, Aleksandre Maskharashvili

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we investigate the mapping of the WORDNET hyponymy relation to feature vectors. Our aim is to model lexical knowledge in such a way that it can be used as input in generic machine-learning models, such as phrase entailment predictors. We propose two models. The first one leverages an existing mapping of words to feature vectors (fastText), and attempts to classify such vectors as within or outside of each class. The second model is fully supervised, using solely WORDNET as a ground truth. It maps each concept to an interval or a disjunction thereof. The first model approaches, but does not quite attain, state-of-the-art performance. The second model can achieve near-perfect accuracy.
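
The interval idea behind the second model can be illustrated with a small sketch. The Python snippet below is a hypothetical illustration, not the authors' implementation or training procedure: it assigns nested intervals to a toy hyponymy hierarchy by depth-first traversal, so that testing "x is a hyponym of y" reduces to interval containment. The function names and the toy hierarchy are assumptions made for this example.

```python
# Minimal sketch (not the paper's exact method): encode a toy hyponymy
# hierarchy as nested intervals, so hyponymy becomes interval containment.

def assign_intervals(tree, root):
    """Assign each node an interval [lo, hi) via depth-first traversal,
    so that a descendant's interval is contained in its ancestor's."""
    intervals = {}
    counter = 0

    def visit(node):
        nonlocal counter
        lo = counter
        counter += 1
        for child in tree.get(node, []):
            visit(child)
        counter += 1
        intervals[node] = (lo, counter)

    visit(root)
    return intervals

def is_hyponym(intervals, x, y):
    """x is a (reflexive) hyponym of y iff x's interval lies within y's."""
    xlo, xhi = intervals[x]
    ylo, yhi = intervals[y]
    return ylo <= xlo and xhi <= yhi

if __name__ == "__main__":
    # Toy fragment of a hyponymy hierarchy (illustrative data only).
    tree = {
        "entity": ["animal", "artifact"],
        "animal": ["dog", "cat"],
        "artifact": ["tool"],
    }
    intervals = assign_intervals(tree, "entity")
    print(is_hyponym(intervals, "dog", "animal"))    # True
    print(is_hyponym(intervals, "dog", "artifact"))  # False
```

The paper's second model learns a mapping from concepts to intervals (or disjunctions of intervals) with WORDNET as supervision; the traversal-based assignment above is only one simple way to obtain such an encoding and is shown here to motivate why containment tests fit the hyponymy relation.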

Original language: English (US)
Title of host publication: Proceedings of the 10th Global WordNet Conference
Editors: Christiane Fellbaum, Piek Vossen, Ewa Rudnicka, Marek Maziarz, Maciej Piasecki
Publisher: Oficyna Wydawnicza Politechniki Wroclawskiej
Pages: 79-84
Number of pages: 6
ISBN (Electronic): 9788374931083
State: Published - 2020
Externally published: Yes
Event: 10th Global WordNet Conference, GWC 2019 - Wroclaw, Poland
Duration: Jul 23, 2019 - Jul 27, 2019

Publication series

Name: Proceedings of the 10th Global WordNet Conference

Conference

Conference: 10th Global WordNet Conference, GWC 2019
Country/Territory: Poland
City: Wroclaw
Period: 7/23/19 - 7/27/19

ASJC Scopus subject areas

  • Computer Networks and Communications
