Publications

Deep context: a neural language model for large-scale networked documents

Abstract

We propose a scalable neural language model that leverages the links between documents to learn their deep context. Our model, Deep Context Vector, uses distributed representations to exploit both the word order within document sentences and the semantic connections among linked documents in a document network. We evaluate our model on large-scale data collections, including Wikipedia pages and scientific and legal citation networks, and demonstrate its effectiveness and efficiency on document classification and link prediction tasks.
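As a rough illustration of the kind of joint objective the abstract describes, the sketch below trains document vectors on two signals at once: the words a document contains and the links it has to other documents. This is a minimal toy in NumPy, not the authors' Deep Context Vector implementation; it simplifies the word signal to a bag-of-words context (the paper additionally exploits word order), and every name and value in it is hypothetical.

```python
# Illustrative sketch only (not the paper's code): jointly learn document
# vectors from (a) the words inside each document and (b) links between
# documents, using a logistic loss with one negative sample per update.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each document is a list of word ids; links are (doc, doc) pairs.
docs = [[0, 1, 2, 3], [2, 3, 4, 5], [0, 5, 6, 7]]
links = [(0, 1), (1, 2)]
vocab_size, n_docs, dim = 8, len(docs), 16

W = rng.normal(0, 0.1, (vocab_size, dim))   # word vectors
D = rng.normal(0, 0.1, (n_docs, dim))       # document vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_pair(U, V, i, j, label, lr=0.05):
    """One logistic-loss step pushing U[i] . V[j] toward the given label."""
    grad = sigmoid(U[i] @ V[j]) - label
    gi, gj = grad * V[j], grad * U[i]
    U[i] -= lr * gi
    V[j] -= lr * gj

for epoch in range(200):
    # (a) word term: a document's vector should score its own words highly.
    for d, words in enumerate(docs):
        for w in words:
            sgd_pair(D, W, d, w, label=1.0)
            sgd_pair(D, W, d, rng.integers(vocab_size), label=0.0)  # negative
    # (b) link term: linked documents should have similar vectors.
    for a, b in links:
        sgd_pair(D, D, a, b, label=1.0)
        sgd_pair(D, D, a, rng.integers(n_docs), label=0.0)  # negative

# The learned rows of D can then feed document classification or be scored
# pairwise for link prediction.
print(np.round(D @ D.T, 2))
```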

Date
August 19, 2017
Authors
Hao Wu, Kristina Lerman
Book
Proceedings of the 26th International Joint Conference on Artificial Intelligence
Pages
3091-3097