Regularizing knowledge graph embeddings via equivalence and inversion axioms

Date
2017-12-30
Author
Minervini, Pasquale
Costabello, Luca
Muñoz, Emir
Nováček, Vít
Vandenbussche, Pierre-Yves
Recommended Citation
Minervini P., Costabello L., Muñoz E., Nováček V., Vandenbussche PY. (2017) Regularizing Knowledge Graph Embeddings via Equivalence and Inversion Axioms. In: Ceci M., Hollmén J., Todorovski L., Vens C., Džeroski S. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2017. Lecture Notes in Computer Science, vol 10534. Springer, Cham
Abstract
Learning embeddings of entities and relations using neural architectures is an effective method of performing statistical learning on large-scale relational data, such as knowledge graphs. In this paper, we consider the problem of regularizing the training of neural knowledge graph embeddings by leveraging external background knowledge. We propose a principled and scalable method for leveraging equivalence and inversion axioms during the learning process, by imposing a set of model-dependent soft constraints on the predicate embeddings. The method has several advantages: i) the number of introduced constraints does not depend on the number of entities in the knowledge base; ii) regularities in the embedding space effectively reflect the available background knowledge; iii) it yields more accurate results in link prediction tasks than non-regularized methods; and iv) it can be adapted to a variety of models without affecting their scalability properties. We demonstrate the effectiveness of the proposed method on several large knowledge graphs. Our evaluation shows that it consistently improves the predictive accuracy of several neural knowledge graph embedding models (for instance, the MRR of TransE on WordNet increases by 11%) without compromising their scalability properties.
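
To illustrate the idea of model-dependent soft constraints on predicate embeddings, the following is a minimal sketch, assuming a TransE-style translation model in which an equivalence axiom p ≡ q suggests r_p ≈ r_q and an inversion axiom p ≡ q⁻¹ suggests r_p ≈ -r_q. The function name axiom_regularizer, the weight lam, and the way the penalty is combined with the ranking loss are illustrative assumptions, not the paper's exact formulation; note that the penalty touches only relation embeddings, so the number of constraints is independent of the number of entities.

```python
import torch

def axiom_regularizer(rel_emb, equiv_pairs, inv_pairs, lam=0.1):
    """Soft-constraint penalty on predicate embeddings (TransE-style sketch).

    rel_emb     : torch.nn.Embedding with one vector per predicate.
    equiv_pairs : list of (p, q) predicate indices with the axiom p ≡ q.
    inv_pairs   : list of (p, q) predicate indices with the axiom p ≡ q⁻¹.
    lam         : weight of the soft constraints (hypothetical default).
    """
    penalty = rel_emb.weight.new_zeros(())
    for p, q in equiv_pairs:
        # Equivalence axiom: the two predicate vectors should coincide.
        penalty = penalty + ((rel_emb.weight[p] - rel_emb.weight[q]) ** 2).sum()
    for p, q in inv_pairs:
        # Inversion axiom under a translation model: r_p ≈ -r_q.
        penalty = penalty + ((rel_emb.weight[p] + rel_emb.weight[q]) ** 2).sum()
    return lam * penalty

# Usage sketch: add the penalty to the usual training loss before backpropagation.
# loss = ranking_loss + axiom_regularizer(rel_emb, equiv_pairs, inv_pairs)
```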