Measuring accuracy of triples in knowledge graphs
Date
2017-06-19
Author
Liu, Shuangyan
d’Aquin, Mathieu
Motta, Enrico
Recommended Citation
Liu, Shuangyan, d'Aquin, Mathieu, & Motta, Enrico (2017). Measuring accuracy of triples in knowledge graphs. Paper presented at the First International Conference, LDK 2017, Galway.
Abstract
An increasing number of large-scale knowledge graphs has been constructed in recent years. These graphs are often created through text-based extraction, which can be very noisy. So far, cleaning knowledge graphs has typically been carried out by human experts and is therefore very inefficient. It is necessary to explore automatic methods for identifying and eliminating erroneous information. To achieve this, previous approaches primarily rely on internal information, i.e., the knowledge graph itself. In this paper, we introduce an automatic approach, Triples Accuracy Assessment (TAA), for validating RDF triples (source triples) in a knowledge graph by finding consensus among matched triples (target triples) from other knowledge graphs. TAA uses knowledge graph interlinks to find identical resources and applies different matching methods between the predicates of source triples and target triples. Based on the matched triples, TAA then calculates a confidence score that indicates the correctness of a source triple. In addition, we present an evaluation of our approach using the FactBench dataset for fact validation. Our findings show promising results for distinguishing between correct and wrong triples.
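The consensus idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual TAA implementation: the predicate-matching measure (simple string similarity), the agreement threshold, and the scoring rule (fraction of matched target triples whose object agrees with the source) are all illustrative assumptions, and the example triples are made up.

```python
# Hypothetical sketch of consensus-based triple validation.
# All function names, the similarity measure, the threshold, and the
# scoring rule are illustrative assumptions, not the TAA method itself.
from difflib import SequenceMatcher

def predicate_similarity(p1: str, p2: str) -> float:
    """String similarity between predicate labels (one possible matcher)."""
    return SequenceMatcher(None, p1.lower(), p2.lower()).ratio()

def confidence_score(source, targets, threshold=0.7):
    """Fraction of matched target triples whose object agrees with the source.

    source  : (subject, predicate, object) triple being validated
    targets : triples about the same resource in other knowledge graphs,
              found e.g. by following owl:sameAs interlinks
    """
    _, s_pred, s_obj = source
    # Keep only target triples whose predicate plausibly matches the source's.
    matched = [t for t in targets if predicate_similarity(s_pred, t[1]) >= threshold]
    if not matched:
        return 0.0  # no external evidence either way
    agree = sum(1 for t in matched if t[2] == s_obj)
    return agree / len(matched)

# Toy example: two of three external graphs agree with the source object.
src = ("dbr:Berlin", "population", "3644826")
tgt = [("wd:Q64", "populationTotal", "3644826"),
       ("yago:Berlin", "hasPopulation", "3644826"),
       ("kg:Berlin", "population", "3500000")]
score = confidence_score(src, tgt)
```

A thresholded predicate matcher is only one of the "different matching methods" the abstract mentions; the paper's confidence score would likewise weight evidence more carefully than this plain agreement ratio.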