TY - GEN
T1 - Representing documents via latent keyphrase inference
AU - Liu, Jialu
AU - Ren, Xiang
AU - Shang, Jingbo
AU - Cassidy, Taylor
AU - Voss, Clare R.
AU - Han, Jiawei
PY - 2016
Y1 - 2016
AB - Many text mining approaches adopt bag-of-words or n-gram models to represent documents. Looking beyond just the words, i.e., the explicit surface forms, in a document can improve a computer's understanding of text. Aware of this, researchers have proposed concept-based models that rely on a human-curated knowledge base to incorporate related concepts into the document representation. However, these methods are not desirable when applied to vertical domains (e.g., literature, enterprise) due to low coverage of in-domain concepts in the general knowledge base and interference from out-of-domain concepts. In this paper, we propose a data-driven model named Latent Keyphrase Inference (LAKI) that represents documents with a vector of closely related domain keyphrases instead of single words or existing concepts in the knowledge base. We show that, given a corpus of in-domain documents, topical content units can be learned for each domain keyphrase, which enables a computer to perform smart inference to discover latent document keyphrases, going beyond explicit mentions. Compared with state-of-the-art document representation approaches, LAKI fills the gap between bag-of-words and concept-based models by using domain keyphrases as the basic representation unit. It removes the dependency on a knowledge base while providing, with keyphrases, readily interpretable representations. When evaluated against eight other methods on two text mining tasks over two corpora, LAKI outperformed them all.
UR - http://www.scopus.com/inward/record.url?scp=84996561194&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84996561194&partnerID=8YFLogxK
U2 - 10.1145/2872427.2883088
DO - 10.1145/2872427.2883088
M3 - Conference contribution
C2 - 28229132
AN - SCOPUS:84996561194
T3 - 25th International World Wide Web Conference, WWW 2016
SP - 1057
EP - 1067
BT - 25th International World Wide Web Conference, WWW 2016
PB - International World Wide Web Conferences Steering Committee
T2 - 25th International World Wide Web Conference, WWW 2016
Y2 - 11 April 2016 through 15 April 2016
ER -